Table of Contents¶

  1. Task 1 – Understand and Clean the Data
  2. Task 2 – Data Processing
  3. Task 3 – Classifier Implementation
  4. Task 4 – Performance Evaluation & Interpretation
In [433]:
# Data processing and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # library -- seaborn
from pyecharts.charts import Bar,Line,Map,Page,Pie # library -- pyecharts
import plotly.express as px
from pyecharts.globals import SymbolType
import plotly.graph_objects as go
from plotly.subplots import make_subplots
In [434]:
# Sklearn
from sklearn.model_selection import train_test_split, RandomizedSearchCV, RepeatedStratifiedKFold, cross_validate
from sklearn.experimental import enable_iterative_imputer  
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
In [435]:
# pipeline
from sklearn import set_config
from sklearn.pipeline import make_pipeline, Pipeline
from imblearn.pipeline import Pipeline as imbPipeline
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
In [436]:
# Imbalanced-data processing (resampling)
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from imblearn.combine import SMOTETomek
from imblearn.under_sampling import TomekLinks
In [437]:
# Other
from scipy.stats import pointbiserialr
import shap
from scipy.stats import loguniform
In [438]:
# building model
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance
In [439]:
import warnings
warnings.filterwarnings("ignore")
In [440]:
df = pd.read_csv("PatientTimeSeries.csv")
In [441]:
df.head(5)
Out[441]:
Patient_id HR O2Sat Temp SBP MAP DBP Resp EtCO2 BaseExcess ... Hgb PTT WBC Fibrinogen Platelets Age Gender HospAdmTime ICULOS SepsisLabel
0 p116812 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN 59.0 1 -6.01 1 0
1 p116812 102.0 100.0 NaN NaN NaN NaN 22.0 NaN NaN ... NaN NaN NaN NaN NaN 59.0 1 -6.01 2 0
2 p116812 102.0 100.0 NaN 99.0 84.0 76.0 18.5 NaN NaN ... NaN NaN NaN NaN NaN 59.0 1 -6.01 3 0
3 p116812 124.0 100.0 NaN 97.0 70.0 55.0 16.0 NaN NaN ... NaN NaN NaN NaN NaN 59.0 1 -6.01 4 0
4 p116812 98.0 100.0 NaN 95.0 73.0 62.0 18.0 NaN NaN ... 7.5 NaN 6.8 NaN 276.0 59.0 1 -6.01 5 0

5 rows × 40 columns

In [442]:
print("There are", df["Patient_id"].nunique(), "patient recordings in the dataset.")
There are 27186 patient recordings in the dataset.
In [443]:
df.columns
Out[443]:
Index(['Patient_id', 'HR', 'O2Sat', 'Temp', 'SBP', 'MAP', 'DBP', 'Resp',
       'EtCO2', 'BaseExcess', 'HCO3', 'FiO2', 'pH', 'PaCO2', 'SaO2', 'AST',
       'BUN', 'Alkalinephos', 'Calcium', 'Chloride', 'Creatinine',
       'Bilirubin_direct', 'Glucose', 'Lactate', 'Magnesium', 'Phosphate',
       'Potassium', 'Bilirubin_total', 'TroponinI', 'Hct', 'Hgb', 'PTT', 'WBC',
       'Fibrinogen', 'Platelets', 'Age', 'Gender', 'HospAdmTime', 'ICULOS',
       'SepsisLabel'],
      dtype='object')
In [444]:
# Step 1: To get all the unique Patient_id
unique_patients = df['Patient_id'].unique()

# Step 2: train/test by patients
train_patients, test_patients = train_test_split(unique_patients, test_size=0.2, random_state=42)

# Step 3: Filter the original data based on Patient_id
train_data = df[df['Patient_id'].isin(train_patients)]
test_data = df[df['Patient_id'].isin(test_patients)]
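As an illustrative alternative (not used above), scikit-learn's `GroupShuffleSplit` performs the same patient-level split in one step, guaranteeing that no `Patient_id` appears in both folds. The `demo` frame below is toy data, not the sepsis dataset.

```python
# Sketch: group-aware splitting with GroupShuffleSplit; "demo" is toy data.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

demo = pd.DataFrame({
    'Patient_id': ['p1', 'p1', 'p2', 'p2', 'p3', 'p3', 'p4', 'p4', 'p5', 'p5'],
    'HR': [80, 82, 90, 91, 70, 72, 88, 85, 95, 97],
    'SepsisLabel': [0, 0, 1, 1, 0, 0, 0, 1, 0, 0],
})

gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(gss.split(demo, groups=demo['Patient_id']))
train_part, test_part = demo.iloc[train_idx], demo.iloc[test_idx]

# No patient crosses the split
assert set(train_part['Patient_id']).isdisjoint(set(test_part['Patient_id']))
```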
In [445]:
# TRAIN
X_train = train_data.drop(columns=['SepsisLabel'])
y_train = train_data['SepsisLabel']

# TEST
X_test = test_data.drop(columns=['SepsisLabel'])
y_test = test_data['SepsisLabel']
In [446]:
train_data.shape
Out[446]:
(836224, 40)
In [447]:
X_train.shape
Out[447]:
(836224, 39)
In [448]:
X_test.shape
Out[448]:
(212351, 39)
In [449]:
y_test.value_counts()
Out[449]:
SepsisLabel
0    208644
1      3707
Name: count, dtype: int64
In [450]:
df['SepsisLabel'].value_counts()
Out[450]:
SepsisLabel
0    1029645
1      18930
Name: count, dtype: int64

TASK 1¶

In [451]:
## Vital signs
vital_signs = ['HR', 'O2Sat', 'Temp', 'SBP', 'MAP', 'DBP', 'Resp', 'EtCO2']
colors = ['gold', 'mediumturquoise'] # %colors
plt.figure(figsize=(18, 12))
plt.subplots_adjust(hspace=0.4)

for i, column in enumerate(vital_signs, 1):
    plt.subplot(4, 2, i)
    sns.histplot(data=X_train, x=column, hue=y_train,
                 stat="density", common_norm=False, bins=40, kde=True,palette=colors)
    plt.title(f'Distribution of {column}')
[Figure: density histograms of the eight vital signs, split by sepsis label]

The histograms show that the distributions of most vital signs differ only modestly between patients with and without sepsis. However, septic patients generally have slightly higher heart rate (HR), body temperature (Temp), and respiratory rate (Resp), consistent with the physiological response to infection.

In [16]:
# List of laboratory test variables to plot
lab_values = [
    'BaseExcess', 'HCO3', 'FiO2', 'pH', 'PaCO2', 'SaO2', 'AST', 'BUN',
    'Alkalinephos', 'Calcium', 'Chloride', 'Creatinine', 'Bilirubin_direct',
    'Glucose', 'Lactate', 'Magnesium', 'Phosphate', 'Potassium',
    'Bilirubin_total', 'TroponinI', 'Hct', 'Hgb', 'PTT', 'WBC',
    'Fibrinogen', 'Platelets'
]
colors = ['gold', 'mediumturquoise'] # %colors
# Set up the figure size to accommodate many subplots
plt.figure(figsize=(22, 40))

# Adjust spacing between subplots for better readability
plt.subplots_adjust(hspace=0.5, wspace=0.3)

# Loop over each laboratory variable to plot its distribution
for i, column in enumerate(lab_values, 1):
    plt.subplot(13, 2, i)  # Create a 13-row, 2-column grid of subplots
    sns.histplot(
        data=X_train, x=column, hue=y_train,
        stat="density", common_norm=False, bins=40, kde=True,palette=colors
    )
    plt.title(f'Distribution of {column}', fontsize=12)
    plt.xlabel(column, fontsize=10)
    plt.ylabel("Density", fontsize=10)

# Automatically adjust the layout to avoid overlap
plt.tight_layout()
plt.show()
[Figure: density histograms of the laboratory values, split by sepsis label]

Compared with non-septic patients, blood oxygen saturation (O₂Sat) and blood pH are slightly higher in septic patients, indicating a more alkaline state. Blood calcium and chloride concentrations are slightly lower. Hematology indicators such as hematocrit (Hct) and hemoglobin (Hgb) are also relatively low, suggesting weakened physiological function. The slightly elevated fibrinogen level may reflect inflammation following infection.

In [17]:
## Demographics
demographic = ['Age', 'Gender', 'HospAdmTime', 'ICULOS']
colors = ['gold', 'mediumturquoise'] # %colors
plt.figure(figsize=(12, 6))
plt.subplots_adjust(hspace=0.4)

for i, column in enumerate(demographic, 1):
    plt.subplot(2, 2, i)
    sns.histplot(data=X_train, x=column, hue=y_train,
                 stat="density", common_norm=False, bins=40, kde=True,palette=colors)
    plt.title(f'Distribution of {column}')
[Figure: density histograms of Age, Gender, HospAdmTime, and ICULOS, split by sepsis label]

Sepsis is more prevalent among the elderly, and septic patients also tend to stay in the ICU longer.

In [417]:
# choose the color
colors = ['gold', 'mediumturquoise']

# Mapping label
gender_map = {0: 'Female', 1: 'Male'}
label_map = {0: 'Non-Sepsis', 1: 'Sepsis'}

# Create a cross-statistical table: the Gender count under each group of SepsisLabel
ts_gender_sepsis = train_data.groupby(['SepsisLabel', 'Gender']).size().reset_index(name='Count')
ts_gender_sepsis['Gender'] = ts_gender_sepsis['Gender'].map(gender_map)
ts_gender_sepsis['SepsisLabel'] = ts_gender_sepsis['SepsisLabel'].map(label_map)

# Create a subgraph with 1 row and 2 columns: The left side is non-sepsis, and the right side is sepsis
fig = make_subplots(rows=1, cols=2, specs=[[{'type':'domain'}, {'type':'domain'}]],
                    subplot_titles=["Non-Sepsis", "Sepsis"])

# non
data_0 = ts_gender_sepsis[ts_gender_sepsis['SepsisLabel'] == 'Non-Sepsis']
fig.add_trace(
    go.Pie(
        labels=data_0['Gender'],
        values=data_0['Count'],
        hole=0.4,
        marker=dict(colors=colors, line=dict(color='#000000', width=2)),
        textinfo='value+percent'
    ),
    row=1, col=1
)

# sepsis
data_1 = ts_gender_sepsis[ts_gender_sepsis['SepsisLabel'] == 'Sepsis']
fig.add_trace(
    go.Pie(
        labels=data_1['Gender'],
        values=data_1['Count'],
        hole=0.4,
        marker=dict(colors=colors, line=dict(color='#000000', width=2)),
        textinfo='value+percent'
    ),
    row=1, col=2
)

# show plot
fig.update_layout(title_text="Gender Distribution by Sepsis Label")
fig.show()

[Figure: pie charts of gender distribution for the non-sepsis and sepsis groups]

The pie charts show that the sepsis rate is slightly higher among men, though the difference is small. Gender is still kept as a candidate variable.

In [27]:
# Summary statistics grouped by SepsisLabel
grouped_stats = train_data.groupby('SepsisLabel').describe().T  

grouped_stats.head(30)
Out[27]:
SepsisLabel 0 1
HR count 739671.000000 14014.000000
mean 84.428369 90.824604
std 17.240487 18.847643
min 20.000000 37.500000
25% 72.000000 77.000000
50% 83.000000 90.000000
75% 95.000000 103.000000
max 201.000000 200.000000
O2Sat count 713077.000000 13624.000000
mean 97.194106 97.005065
std 2.939943 3.287885
min 20.000000 39.000000
25% 96.000000 95.500000
50% 98.000000 98.000000
75% 99.500000 100.000000
max 100.000000 100.000000
Temp count 275879.000000 5108.000000
mean 36.967067 37.248356
std 0.758486 1.052863
min 20.900000 30.900000
25% 36.500000 36.610000
50% 37.000000 37.300000
75% 37.450000 37.950000
max 42.220000 40.750000
SBP count 701476.000000 12936.000000
mean 123.870400 121.281424
std 23.252196 25.081367
min 20.000000 45.000000
25% 107.000000 103.000000
50% 121.000000 118.375000
In [18]:
def missing_values_table(df):
    mis_val = df.isnull().sum() # Calculate the total number of missing values for each column
    mis_val_percent = 100 * df.isnull().sum() / len(df) # Calculate the percentage of missing values for each column
    mis_val_table = pd.concat([mis_val, mis_val_percent], axis = 1) # Combine the counts and percentages into a single DataFrame
    mis_val_table_ren_columns = mis_val_table.rename(columns = {0:'Missing Values',
                                                               1:'% of Total Values'})
    mis_val_table_ren_columns = mis_val_table_ren_columns[
        mis_val_table_ren_columns.iloc[:,1] != 0].sort_values('% of Total Values',ascending=False).round(1)
    # Filter out columns with no missing values
    
    print('Your selected dataframe has {} columns.\nThere are {} columns that have missing values.'.format(df.shape[1], mis_val_table_ren_columns.shape[0]))
    # Print the information about missing values

    print("\nColumns with missing values:")
    print(list(mis_val_table_ren_columns.index))
    return mis_val_table_ren_columns

missing_values_table(X_train)
Your selected dataframe has 39 columns.
There are 35 columns that have missing values.

Columns with missing values:
['Bilirubin_direct', 'Fibrinogen', 'TroponinI', 'Bilirubin_total', 'Alkalinephos', 'AST', 'Lactate', 'PTT', 'SaO2', 'EtCO2', 'Phosphate', 'HCO3', 'Chloride', 'BaseExcess', 'PaCO2', 'Calcium', 'Platelets', 'Creatinine', 'Magnesium', 'WBC', 'BUN', 'pH', 'Hgb', 'FiO2', 'Hct', 'Potassium', 'Glucose', 'Temp', 'DBP', 'Resp', 'SBP', 'O2Sat', 'MAP', 'HR', 'HospAdmTime']
Out[18]:
Missing Values % of Total Values
Bilirubin_direct 834552 99.8
Fibrinogen 830803 99.4
TroponinI 828305 99.1
Bilirubin_total 823793 98.5
Alkalinephos 822778 98.4
AST 822647 98.4
Lactate 814174 97.4
PTT 811513 97.0
SaO2 807520 96.6
EtCO2 806908 96.5
Phosphate 802586 96.0
HCO3 801033 95.8
Chloride 798211 95.5
BaseExcess 790853 94.6
PaCO2 790259 94.5
Calcium 787196 94.1
Platelets 786705 94.1
Creatinine 785289 93.9
Magnesium 783242 93.7
WBC 782515 93.6
BUN 778690 93.1
pH 778487 93.1
Hgb 774501 92.6
FiO2 766319 91.6
Hct 762094 91.1
Potassium 758747 90.7
Glucose 693917 83.0
Temp 555237 66.4
DBP 260735 31.2
Resp 128939 15.4
SBP 121812 14.6
O2Sat 109523 13.1
MAP 104153 12.5
HR 82539 9.9
HospAdmTime 8 0.0
In [19]:
#import math
#columns_to_plot = X_train.columns[1:] # I didn't choose ID
#excluded1 = ['Gender', 'Unit1', 'Unit2']
#columns_to_plot  = [col for col in columns_to_plot if col not in excluded1]
#
#num_cols = 3  # Number of plots per row
#num_rows = math.ceil(len(columns_to_plot) / num_cols)  # Calculate required rows
#
#fig, axes = plt.subplots(num_rows, num_cols, figsize=(15, 5 * num_rows))
#axes = axes.flatten()
#
#for i, col in enumerate(columns_to_plot):
#    sns.boxplot(x=y_train, y=col, data=X_train, ax=axes[i], showmeans=True, meanline=True, 
#                notch=True, flierprops=dict(marker='*', color='red', markersize=6), 
#                palette={'0': '#EFD496', '1': '#9BB89C'}) 
#    axes[i].set_title(f"Boxplot of {col}")
#    axes[i].set_xlabel("SepsisLabel")
#    axes[i].set_ylabel(col)
#
## Remove any empty subplots
#for j in range(i + 1, len(axes)):
#    fig.delaxes(axes[j])
#
#plt.tight_layout()
#plt.show()
In [20]:
#columns_to_plot = X_train.columns
#plot_data = X_train.loc[:,columns_to_plot]
#plot_data.plot(kind='box', subplots=True, layout=(7,6), sharex=False, sharey=False, figsize=(20, 20))
#plt.tight_layout(pad=0.5, w_pad=0.6, h_pad=2.0)
#plt.suptitle("The Distributions of the Values Across Variables", fontsize=16, fontweight='bold', y=1.05)
#plt.tight_layout(rect=[0, 0, 1, 1.05])  # 'rect' Reserve top space to avoid suptitle overlap
#plt.show()

Dataset Overview¶

The dataset contains clinical time-series data from 27,186 unique patients, with repeated measurements across various physiological and laboratory parameters.

Target Variable: SepsisLabel¶

The SepsisLabel indicates whether a patient was diagnosed with sepsis at a given time point:

  • 0 → Non-sepsis observations: 1,029,645
  • 1 → Sepsis-positive observations: 18,930

This implies that sepsis cases are highly imbalanced, making it an important consideration for model development and evaluation.
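As a rough quantification of this imbalance, the counts above translate into a class-weight factor. The sketch below uses the `value_counts()` output; passing the ratio to XGBoost's `scale_pos_weight` is one hedged option alongside the SMOTE/undersampling imports loaded earlier.

```python
# Derive a class-weight factor from the observed label counts
# (taken from the value_counts() output above).
neg, pos = 1_029_645, 18_930
imbalance_ratio = neg / pos  # roughly 54 non-sepsis rows per sepsis row

# One option: pass this ratio to XGBoost rather than resampling, e.g.
# XGBClassifier(scale_pos_weight=imbalance_ratio)
print(f"{imbalance_ratio:.1f} negatives per positive")
```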

Demographic & Feature Types¶

  • Categorical variable:

    • Gender — Indicates the biological sex of the patient.
  • Continuous variables:

    • All other variables (e.g., HR, O2Sat, FiO2, MAP, etc.) represent continuous measurements of vital signs or laboratory results.

The dataset contains a large number of variables, many of which are redundant. To simplify the modeling process and reduce multicollinearity, I performed an initial feature selection in subsequent steps to remove unnecessary features:

  • 'SBP' — Replaced by MAP (Mean Arterial Pressure), which is used instead of both SBP and DBP.

  • 'DBP' — Also replaced by MAP, as it provides a more stable and comprehensive representation of blood pressure.

Outlier Detection from Boxplots¶

Boxplot visualizations revealed several physiologically implausible or erroneous values in the dataset:

  • FiO2:
    The FiO2 variable contains an extreme outlier with a value of 10, which is clinically invalid. FiO2 represents the fraction of inspired oxygen, and its normal range is between 0.21 and 1.0. A value of 10 indicates a clear data entry error and needs correction.

  • BaseExcess:
    BaseExcess is used to assess the acid-base balance in the body. The normal clinical range is approximately -2 to +2 mmol/L, and values beyond ±5–10 mmol/L indicate severe metabolic disturbances.
    A value as high as 100 is not physiologically plausible, suggesting erroneous recording. Values beyond ±20 mmol/L are extremely rare and should be reviewed or corrected.

  • HospAdmTime:
    This variable represents the hospital admission time. The boxplot shows a large number of negative values, some even less than -5000. Although such values may initially seem invalid, they are likely due to how timestamps were recorded or how time zero was defined in the dataset (e.g., relative to ICU admission time).
    Therefore, these negative values are not treated as erroneous, but should be interpreted with caution depending on the context in downstream analysis.

  • AST:
    Based on the boxplot, extremely high AST values (e.g., > 5000 IU/L) that deviate significantly from the normal physiological range were observed. I chose to cap the maximum AST value at 5000 IU/L to prevent a small number of outliers from dominating the model training.

  • Age:
    Patients over 90 years old are all recorded as age 100, which may distort the data, since physical resilience above 90 varies greatly. Therefore, in TASK 2, I will exclude patients over 90 years old.

  • Calcium:
    The calcium values show outliers on the plot, but no particular abnormalities were found upon rough inspection.

These anomalies must be handled either by setting them to NaN for imputation or by applying range-based clipping to ensure data integrity.
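As a minimal sketch of the two handling options just described (the 5000 IU/L and 1.0 thresholds mirror the text; the toy frame is illustrative):

```python
# Range-based clipping vs. setting implausible values to NaN, on toy data.
import numpy as np
import pandas as pd

toy = pd.DataFrame({'AST': [30.0, 7200.0, 55.0], 'FiO2': [0.4, 10.0, 0.21]})

toy['AST'] = toy['AST'].clip(upper=5000)      # clipping keeps the row, capped
toy.loc[toy['FiO2'] > 1.0, 'FiO2'] = np.nan   # NaN hands the value to the imputer
```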

In [21]:
X_train[X_train["Calcium"] < 2.25].head().T
Out[21]:
45 46 50 53 57
Patient_id p109932 p109932 p109932 p109932 p109932
HR 87.0 88.0 111.0 85.5 94.0
O2Sat 96.0 95.0 92.0 95.0 95.0
Temp 34.6 35.5 37.1 37.0 NaN
SBP 110.0 92.5 161.0 93.0 NaN
MAP 84.0 71.0 115.0 71.0 NaN
DBP 68.0 58.5 90.0 58.0 NaN
Resp 18.0 19.0 24.0 18.0 NaN
EtCO2 NaN 34.0 42.0 NaN 34.0
BaseExcess NaN NaN NaN NaN NaN
HCO3 NaN NaN NaN NaN NaN
FiO2 1.0 0.8 0.7 0.5 0.5
pH 7.32 7.36 7.4 7.38 7.4
PaCO2 48.0 47.0 42.0 42.0 37.0
SaO2 NaN NaN NaN NaN NaN
AST NaN NaN NaN NaN NaN
BUN NaN NaN NaN NaN NaN
Alkalinephos NaN NaN NaN NaN NaN
Calcium 1.29 1.33 1.3 1.24 1.26
Chloride 109.0 109.0 110.0 109.0 109.0
Creatinine NaN NaN NaN NaN NaN
Bilirubin_direct NaN NaN NaN NaN NaN
Glucose 262.0 176.0 86.0 141.0 254.0
Lactate 4.3 3.28 2.15 1.6 1.33
Magnesium NaN NaN NaN NaN NaN
Phosphate NaN NaN NaN NaN NaN
Potassium 3.5 3.7 4.4 5.4 5.6
Bilirubin_total NaN NaN NaN NaN NaN
TroponinI NaN NaN NaN NaN NaN
Hct NaN NaN NaN NaN NaN
Hgb NaN NaN NaN NaN NaN
PTT NaN NaN NaN NaN 27.7
WBC NaN NaN NaN NaN NaN
Fibrinogen NaN NaN NaN NaN 207.0
Platelets NaN NaN NaN NaN NaN
Age 51.0 51.0 51.0 51.0 51.0
Gender 1 1 1 1 1
HospAdmTime -21.47 -21.47 -21.47 -21.47 -21.47
ICULOS 9 10 14 17 21

Missing Value Handling Strategy (Based on Missingness Levels)¶

  • < 15% Missing 'HR', 'MAP', 'O2Sat', 'Resp':
    Forward-fill followed by backward-fill (LOCF + backfill) was applied per patient to preserve the temporal structure of the data without introducing leakage.

  • 15–90% Missing 'Bilirubin_direct', 'Fibrinogen', 'TroponinI', 'Bilirubin_total', 'Alkalinephos', 'AST', 'Lactate', 'PTT', 'SaO2', 'EtCO2', 'Phosphate', 'HCO3', 'Chloride', 'PaCO2', 'Calcium', 'Platelets', 'Creatinine', 'Magnesium', 'WBC', 'BUN', 'pH', 'Hgb', 'Hct', 'Potassium':
    These variables were imputed with the median (see the median vs. MICE comparison below).

  • > 90% Missing:
    These columns are too sparse to provide reliable signals.
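A toy illustration of the low-missingness rule above, as used in the fill cells below: per-patient forward-fill then backward-fill, so values never cross patient boundaries.

```python
# Per-patient LOCF + backfill on toy data; column names are illustrative.
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'Patient_id': ['a', 'a', 'a', 'b', 'b'],
    'HR': [np.nan, 80.0, np.nan, np.nan, 95.0],
})
toy['HR'] = toy.groupby('Patient_id')['HR'].transform(lambda s: s.ffill().bfill())
# patient 'a' rows all become 80.0; patient 'b' rows all become 95.0
```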

In [22]:
# df_impute = df.copy()
In [23]:
#df_impute.drop(['DBP', 'SBP','BaseExcess','HCO3','PaCO2',
#                'Phosphate','PTT','Fibrinogen','pH','EtCO2','SaO2','HospAdmTime' ,'TroponinI'], axis=1, inplace=True)
In [24]:
# Treat implausible FiO2 values (e.g., > 1.0) as missing
# df_impute.loc[df['FiO2'] > 1.0, 'FiO2'] = np.nan
# Fill by patient using LOCF (Forward Fill)
# df_impute['FiO2'] = df_impute.groupby('Patient_id')['FiO2'].ffill()
In [25]:
# Treat implausible BaseExcess values (e.g., > 20) as missing
# df_impute.loc[df['BaseExcess'] > 20, 'BaseExcess'] = np.nan
# Fill by patient using LOCF (Forward Fill)
# df_impute['BaseExcess'] = df_impute.groupby('Patient_id')['BaseExcess'].ffill()
In [20]:
# List of variables with implausible high values and their thresholds
clip_vars = {'FiO2': 1.0, 'BaseExcess': 20}

# Set implausible values to NaN
for var, max_val in clip_vars.items():
    X_train.loc[X_train[var] > max_val, var] = np.nan

# Patient-wise forward fill (LOCF) for selected variables
for var in clip_vars:
    X_train[var] = X_train.groupby('Patient_id')[var].transform(lambda x: x.ffill().bfill())
In [21]:
# X_train.isnull().sum()
In [22]:
X_train['AST'] = np.where(X_train['AST'] > 5000, 5000, X_train['AST'])
In [23]:
cols_20 = ['HR', 'MAP', 'O2Sat', 'Resp','SBP','BaseExcess','FiO2']
In [24]:
# df_impute = df_impute.reset_index(drop=True) # Remove extra tags
In [25]:
# X_train1, X_test1, y_train1, y_test1 = train_test_split(df_impute.drop("SepsisLabel", axis=1), df_impute["SepsisLabel"], test_size=0.2, random_state=42)
In [26]:
# df_impute
In [27]:
for col in cols_20:
    X_train[col] = X_train.groupby('Patient_id')[col].transform(lambda x: x.ffill().bfill()) 
# Variables with less than 20% missing values are filled using LOCF and backfill.
In [28]:
# X_train.isnull().sum()
# The data in "col_20" still have the missing value
In [29]:
for col in cols_20:
    X_test[col] = X_test.groupby('Patient_id')[col].transform(lambda x: x.ffill().bfill())
In [30]:
#dff = df_impute.copy()
In [37]:
#dff
In [38]:
def missing_values_table(X_train):
    mis_val = X_train.isnull().sum() # Calculate the total number of missing values for each column
    mis_val_percent = 100 * X_train.isnull().sum() / len(X_train) # Calculate the percentage of missing values for each column
    mis_val_table = pd.concat([mis_val, mis_val_percent], axis = 1) # Combine the counts and percentages into a single DataFrame
    mis_val_table_ren_columns = mis_val_table.rename(columns = {0:'Missing Values',
                                                               1:'% of Total Values'})
    mis_val_table_ren_columns = mis_val_table_ren_columns[
        mis_val_table_ren_columns.iloc[:,1] != 0].sort_values('% of Total Values',ascending=False).round(1)
    # Filter out columns with no missing values
    
    print('Your selected dataframe has {} columns.\nThere are {} columns that have missing values.'.format(X_train.shape[1], mis_val_table_ren_columns.shape[0]))
    # Print the information about missing values

    print("\nColumns with missing values:")
    print(list(mis_val_table_ren_columns.index))
    return mis_val_table_ren_columns

missing_values_table(X_train)
Your selected dataframe has 39 columns.
There are 35 columns that have missing values.

Columns with missing values:
['Bilirubin_direct', 'Fibrinogen', 'TroponinI', 'Bilirubin_total', 'Alkalinephos', 'AST', 'Lactate', 'PTT', 'SaO2', 'EtCO2', 'Phosphate', 'HCO3', 'Chloride', 'PaCO2', 'Calcium', 'Platelets', 'Creatinine', 'Magnesium', 'WBC', 'BUN', 'pH', 'Hgb', 'Hct', 'Potassium', 'Glucose', 'Temp', 'BaseExcess', 'FiO2', 'DBP', 'SBP', 'MAP', 'Resp', 'O2Sat', 'HR', 'HospAdmTime']
Out[38]:
Missing Values % of Total Values
Bilirubin_direct 834552 99.8
Fibrinogen 830803 99.4
TroponinI 828305 99.1
Bilirubin_total 823793 98.5
Alkalinephos 822778 98.4
AST 822647 98.4
Lactate 814174 97.4
PTT 811513 97.0
SaO2 807520 96.6
EtCO2 806908 96.5
Phosphate 802586 96.0
HCO3 801033 95.8
Chloride 798211 95.5
PaCO2 790259 94.5
Calcium 787196 94.1
Platelets 786705 94.1
Creatinine 785289 93.9
Magnesium 783242 93.7
WBC 782515 93.6
BUN 778690 93.1
pH 778487 93.1
Hgb 774501 92.6
Hct 762094 91.1
Potassium 758747 90.7
Glucose 693917 83.0
Temp 555237 66.4
BaseExcess 533215 63.8
FiO2 419704 50.2
DBP 260735 31.2
SBP 5745 0.7
MAP 1260 0.2
Resp 1106 0.1
O2Sat 205 0.0
HR 50 0.0
HospAdmTime 8 0.0
In [39]:
import missingno as msno
import matplotlib.pyplot as plt
msno.heatmap(X_train, figsize=(16,8))
plt.title("Missing Value Correlation Heatmap")
plt.show()
[Figure: missing-value correlation heatmap (missingno)]

Some redundant indicators have been removed, but several key variables still have very high missingness.
(Direct deletion might hurt the subsequent modeling, so my initial plan was to impute sparse variables (>15% missing) with the median and keep the rest as NaN.) I later realized this was a mistake and adjusted the approach: if the missing values are filled in at the very beginning, computing the deltas later becomes meaningless.

In [420]:
# X_train.columns
In [460]:
#num_features = ['Bilirubin_direct',
#    'Fibrinogen',
#    'TroponinI',
#    'Bilirubin_total',
#    'Alkalinephos',
#    'AST',
#    'Lactate',
#    'PTT',
#    'SaO2',
#    'EtCO2',
#    'Phosphate',
#    'HCO3',
#    'Chloride',
#    'BaseExcess','PaCO2','Calcium','Platelets','Creatinine','Magnesium','WBC','BUN','pH','Hgb','FiO2','Hct','Potassium','Glucose','Temp','DBP']
#
#X_num = X_train[num_features]  
#imputer1 = IterativeImputer(estimator=BayesianRidge(), max_iter=10, random_state=42)
#
#X_num_imputed = imputer1.fit_transform(X_num)
#
#X_num_imputed = pd.DataFrame(X_num_imputed, columns=num_features)
In [39]:
#cols_morethan20 = [
#'Bilirubin_direct',
#    'Fibrinogen',
#    'TroponinI',
#    'Bilirubin_total',
#    'Alkalinephos',
#    'AST',
#    'Lactate',
#    'PTT',
#    'SaO2',
#    'EtCO2',
#    'Phosphate',
#    'HCO3',
#    'Chloride',
#    'BaseExcess','PaCO2','Calcium','Platelets','Creatinine','Magnesium','WBC','BUN','pH','Hgb','FiO2','Hct','Potassium','Glucose','Temp','DBP'
#]
#imputer = SimpleImputer(strategy="median")
#X_median_imputed = imputer.fit_transform(X_train[cols_morethan20])
#X_median_imputed = pd.DataFrame(X_median_imputed, columns=cols_morethan20 )
In [43]:
def compare_stats(X1, X2, col):
    def iqr(x): return x.quantile(0.75) - x.quantile(0.25)
    print(f"\n📊 {col} compare")
    print(f"IQR:     Median = {iqr(X1[col]):.2f}, MICE = {iqr(X2[col]):.2f}")
    print(f"Mean:    Median = {X1[col].mean():.2f}, MICE = {X2[col].mean():.2f}")
    print(f"Std:     Median = {X1[col].std():.2f}, MICE = {X2[col].std():.2f}")
    print(f"Skewness:Median = {X1[col].skew():.2f}, MICE = {X2[col].skew():.2f}")
In [145]:
#for col in cols_to_compare:
#    compare_stats(X_median_imputed, X_num_imputed,cols_morethan20)
In [396]:
#compare_impute_distribution_batch(X_median_imputed, X_num_imputed, cols_morethan20)
In [431]:
#def compare_boxplots_grid(X_median, X_mice, columns, n_cols=3):
#    n_rows = -(-len(columns) // n_cols)  
#    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols * 5, n_rows * 4))
#    axes = axes.flatten()
#
#    for i, col in enumerate(columns):

#        df_plot = pd.DataFrame({
#            col: pd.concat([X_median[col], X_mice[col]], ignore_index=True),
#            "Imputation": ['Median'] * len(X_median) + ['MICE'] * len(X_mice)
#        })
#
#        sns.boxplot(ax=axes[i], x="Imputation", y=col, data=df_plot,
#                    palette=["#F4A261", "#2A9D8F"])
#        axes[i].set_title(f"{col}", fontsize=12)
#        axes[i].grid(True, linestyle="--", alpha=0.3)
#
#
#    for j in range(i + 1, len(axes)):
#        fig.delaxes(axes[j])
#
#    plt.tight_layout()
#    plt.suptitle("Boxplot: Median vs MICE Imputation", fontsize=16, y=1.02)
#    plt.show()
#
In [55]:
# compare_boxplots_grid(X_median_imputed, X_num_imputed, cols_morethan20)

The result of using median imputation initially was not good. I subsequently experimented with the more sophisticated MICE method, especially considering that some variables exhibited strong pairwise missing-value correlations. My initial plan was to apply MICE to variables with high missing-value correlation and retain median imputation for the rest.

However, I found that many variables with more than 20% missing data actually had over 90% of their values missing. In such cases, even using a powerful method like MICE may lead to poor generalizability and compromise interpretability, as the imputed values would be based on minimal observed information.

Although median imputation carries a risk of bias and oversimplification, I plan to perform feature selection in the next step, which may mitigate this issue. Therefore, for variables with more than 20% missingness, I chose to retain median imputation for the sake of simplicity, interpretability, and robustness.
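A minimal sketch of the chosen median strategy on toy data, emphasizing that the imputer is fit on the training fold only and its learned medians are reused on the test fold, avoiding test-set leakage (the column name is illustrative):

```python
# Train-only median imputation with SimpleImputer; toy data.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

train_toy = pd.DataFrame({'Lactate': [1.0, np.nan, 3.0, 5.0]})
test_toy = pd.DataFrame({'Lactate': [np.nan, 2.0]})

imp = SimpleImputer(strategy='median')
train_filled = imp.fit_transform(train_toy)  # median of observed values: 3.0
test_filled = imp.transform(test_toy)        # the training median 3.0 is reused
```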

In [461]:
#cols_morethan20_add = [
#'Bilirubin_direct',
#    'Fibrinogen',
#    'TroponinI',
#    'Bilirubin_total',
#    'Alkalinephos',
#    'AST',
#    'Lactate',
#    'PTT',
#    'SaO2',
#    'EtCO2',
#    'Phosphate',
#    'HCO3',
#    'Chloride',
#    'BaseExcess','PaCO2','Calcium','Platelets','Creatinine','Magnesium','WBC','BUN','pH','Hgb','FiO2','Hct','Potassium','Glucose','Temp','DBP',
#    'HR', 'MAP', 'O2Sat', 'Resp','SBP','BaseExcess','FiO2'
#]
#imputer = SimpleImputer(strategy="median")
#X_train[cols_morethan20_add] = imputer1.fit_transform(X_train[cols_morethan20_add])

Since some values are missing both before and after a given point, LOCF and backfill cannot fill them, so a second pass with the median is needed.

In [462]:
#X_test[cols_morethan20_add] = imputer1.transform(X_test[cols_morethan20_add])
In [42]:
## Summary statistics
## col_summary = df_impute.columns[1:-1]
## df_impute.groupby('SepsisLabel')[col_summary].describe()
In [463]:
# X_train
Out[463]:
Patient_id HR O2Sat Temp SBP MAP DBP Resp EtCO2 BaseExcess ... Hct Hgb PTT WBC Fibrinogen Platelets Age Gender HospAdmTime ICULOS
0 p116812 102.0 100.0 37.00 99.0 84.0 62.0 22.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 59.00 1 -6.01 1
1 p116812 102.0 100.0 37.00 99.0 84.0 62.0 22.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 59.00 1 -6.01 2
2 p116812 102.0 100.0 37.00 99.0 84.0 76.0 18.5 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 59.00 1 -6.01 3
3 p116812 124.0 100.0 37.00 97.0 70.0 55.0 16.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 59.00 1 -6.01 4
4 p116812 98.0 100.0 37.00 95.0 73.0 62.0 18.0 33.0 0.0 ... 23.1 7.5 32.4 6.8 255.0 276.0 59.00 1 -6.01 5
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1048570 p016300 89.0 100.0 36.44 97.0 67.0 54.0 24.0 33.0 -1.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 68.62 1 -115.55 6
1048571 p016300 92.0 100.0 37.00 130.0 86.0 62.0 21.0 33.0 -1.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 68.62 1 -115.55 7
1048572 p016300 94.0 100.0 37.00 105.0 74.0 59.0 17.0 33.0 -1.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 68.62 1 -115.55 8
1048573 p016300 95.0 100.0 36.89 89.0 65.0 53.0 20.0 33.0 -1.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 68.62 1 -115.55 9
1048574 p016300 92.0 100.0 37.00 112.0 77.0 60.0 17.0 33.0 -1.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 68.62 1 -115.55 10

836224 rows × 39 columns

In [395]:
#zero_ratio = (X_train == 0).sum() / len(X_train)
#zero_ratio = zero_ratio.sort_values(ascending=False)
#print(zero_ratio)
In [44]:
X_test
Out[44]:
Patient_id HR O2Sat Temp SBP MAP DBP Resp EtCO2 BaseExcess ... Hct Hgb PTT WBC Fibrinogen Platelets Age Gender HospAdmTime ICULOS
140 p000902 78.0 100.0 37.00 90.0 79.00 62.0 13.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 65.55 1 -0.02 1
141 p000902 78.0 100.0 37.00 90.0 79.00 61.0 13.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 65.55 1 -0.02 2
142 p000902 80.0 100.0 37.00 82.0 72.00 56.0 19.0 33.0 0.0 ... 30.3 14.0 32.4 10.3 255.0 181.0 65.55 1 -0.02 3
143 p000902 84.0 100.0 37.00 94.0 78.00 64.0 20.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 65.55 1 -0.02 4
144 p000902 83.0 99.0 36.11 94.0 87.00 66.0 16.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 65.55 1 -0.02 5
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1048521 p009220 91.0 98.0 37.00 99.0 77.67 62.0 25.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 47.92 1 -0.04 15
1048522 p009220 86.0 95.0 37.00 107.0 73.00 62.0 17.0 33.0 0.0 ... 30.3 10.3 50.3 10.3 255.0 181.0 47.92 1 -0.04 16
1048523 p009220 94.0 96.0 37.00 117.0 75.00 62.0 20.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 47.92 1 -0.04 17
1048524 p009220 97.0 98.0 37.00 114.0 77.33 62.0 23.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 47.92 1 -0.04 18
1048525 p009220 90.0 96.0 37.00 119.0 81.67 62.0 18.0 33.0 0.0 ... 30.3 10.3 32.4 10.3 255.0 181.0 47.92 1 -0.04 19

212351 rows × 39 columns

TASK 2¶

In [32]:
def extract_delta_features(data):
    # Sort by patient and ICU stay duration
    data_sorted = data.sort_values(by=["Patient_id", "ICULOS"])

    # Variables to calculate Δm
    delta_cols = data.columns.difference(['Patient_id', 'Gender', 'SepsisLabel', 'ICULOS', 'Age'])

    # Δm = last - first, based only on real (non-missing) values
    def robust_delta(series):
        non_missing = series.dropna()
        if len(non_missing) >= 2:
            return non_missing.iloc[-1] - non_missing.iloc[0]
        else:
            return np.nan  # if I can't calculate a delta, return NaN

    delta_data = data_sorted.groupby('Patient_id')[delta_cols].agg(robust_delta)

    # Get the final label for each patient
    label_data = data_sorted.groupby('Patient_id')['SepsisLabel'].agg(lambda x: x.iloc[-1])

    # Get first-recorded Age and Gender for each patient
    demo_data = data_sorted.groupby('Patient_id')[['Age', 'Gender']].agg(lambda x: x.iloc[0])

    # Merge everything
    result = delta_data.copy()
    result['SepsisLabel'] = label_data
    result[['Age', 'Gender']] = demo_data

    return result.reset_index()
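As a quick check of the Δm semantics used above, `robust_delta` returns last-minus-first over real values only, and NaN when fewer than two real values exist (toy series below are made up, not from the dataset):

```python
import numpy as np
import pandas as pd

def robust_delta(series):
    # Same logic as in extract_delta_features: last real value minus first real value
    non_missing = series.dropna()
    if len(non_missing) >= 2:
        return non_missing.iloc[-1] - non_missing.iloc[0]
    return np.nan

s = pd.Series([np.nan, 80.0, np.nan, 95.0, np.nan])  # e.g. hourly HR with gaps
print(robust_delta(s))                               # 15.0
print(robust_delta(pd.Series([np.nan, 72.0])))       # only one real value -> nan
```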
In [33]:
#y_train
In [34]:
# X_temp
In [35]:
X_train.shape
Out[35]:
(836224, 39)
In [36]:
# For convenient per-patient aggregation, merge the labels back into the feature frames first
X_temp = X_train.copy()
X_temp['SepsisLabel'] = y_train.values
X_testall = X_test.copy()
X_testall['SepsisLabel'] = y_test.values
In [37]:
X_train_agg = extract_delta_features(X_temp)
X_test_agg = extract_delta_features(X_testall)

Tips¶

I split the data into training and test sets very early. This made the aggregation step more verbose, with some redundant code, but it is also a safeguard against data leakage: every transformation is fitted on the training set alone. For that reason I kept the early split instead of merging the sets and splitting later.
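The leakage concern above is usually handled with a patient-level (grouped) split, so that no `Patient_id` appears in both sets. A minimal sketch with synthetic data (column names match the notebook; the values and patient ids are made up):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df_toy = pd.DataFrame({
    "Patient_id": ["p1"] * 3 + ["p2"] * 2 + ["p3"] * 4 + ["p4"],
    "HR": range(10),
})
# One split, with roughly a quarter of the patients held out for testing
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(splitter.split(df_toy, groups=df_toy["Patient_id"]))

train_ids = set(df_toy.loc[train_idx, "Patient_id"])
test_ids = set(df_toy.loc[test_idx, "Patient_id"])
print(train_ids, test_ids)  # the two sets share no patient
```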

In [38]:
# Separate the features from the labels
y_train_agg = X_train_agg.pop('SepsisLabel')
y_test_agg = X_test_agg.pop('SepsisLabel')
In [39]:
X_train_agg['Patient_id'].nunique()
Out[39]:
21748
In [40]:
print("X_train shape:", X_train_agg.shape)
print("X_test shape:", X_test_agg.shape)
X_train shape: (21748, 38)
X_test shape: (5438, 38)
In [41]:
print("X_train shape:", y_train_agg.shape)
print("X_test shape:", y_test_agg.shape)
X_train shape: (21748,)
X_test shape: (5438,)
In [42]:
X_train_agg
Out[42]:
Patient_id AST Alkalinephos BUN BaseExcess Bilirubin_direct Bilirubin_total Calcium Chloride Creatinine ... Potassium Resp SBP SaO2 Temp TroponinI WBC pH Age Gender
0 p000001 NaN NaN 8.0 -1.0 NaN NaN 0.3 0.0 0.00 ... 0.8 -1.0 -20.0 1.0 0.22 NaN 9.0 -0.04 83.14 0
1 p000002 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN -6.5 -10.0 NaN -0.33 NaN NaN NaN 75.91 0
2 p000003 NaN NaN -6.0 -3.0 NaN NaN -0.1 1.0 -0.10 ... 0.3 -3.0 7.0 NaN 0.00 NaN -1.3 -0.02 45.82 0
3 p000007 NaN NaN -16.0 3.0 NaN NaN 2.1 10.0 -0.30 ... -1.7 -16.0 -39.5 NaN 0.11 NaN -0.5 0.11 64.24 1
4 p000008 NaN NaN 3.0 2.0 NaN NaN 0.8 5.0 -0.10 ... 1.2 -1.5 36.0 NaN 0.11 NaN -2.0 0.09 87.08 1
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
21743 p119995 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN -3.0 10.0 NaN -0.20 NaN NaN NaN 76.00 1
21744 p119996 NaN NaN 4.0 NaN NaN NaN 0.1 NaN 0.17 ... 0.7 -2.0 -27.0 NaN 0.00 0.0 NaN NaN 84.00 0
21745 p119997 NaN NaN -1.0 NaN NaN NaN 8.0 NaN -0.16 ... 0.5 6.0 -7.0 NaN 1.00 NaN -0.8 NaN 30.00 1
21746 p119998 NaN NaN 12.0 NaN NaN NaN 0.5 NaN 2.17 ... 0.1 3.0 -15.0 NaN 0.30 NaN -2.9 NaN 60.00 0
21747 p119999 1.0 -1.0 -1.0 NaN NaN 0.2 -0.1 NaN 0.06 ... 0.0 -1.0 23.0 NaN 0.40 NaN -1.4 NaN 84.00 0

21748 rows × 38 columns

In [43]:
#X_train_agg.loc[X_train_agg['Patient_id'] == 'p000001', 'HR']
In [44]:
X_train_agg.columns
Out[44]:
Index(['Patient_id', 'AST', 'Alkalinephos', 'BUN', 'BaseExcess',
       'Bilirubin_direct', 'Bilirubin_total', 'Calcium', 'Chloride',
       'Creatinine', 'DBP', 'EtCO2', 'FiO2', 'Fibrinogen', 'Glucose', 'HCO3',
       'HR', 'Hct', 'Hgb', 'HospAdmTime', 'Lactate', 'MAP', 'Magnesium',
       'O2Sat', 'PTT', 'PaCO2', 'Phosphate', 'Platelets', 'Potassium', 'Resp',
       'SBP', 'SaO2', 'Temp', 'TroponinI', 'WBC', 'pH', 'Age', 'Gender'],
      dtype='object')
In [45]:
X_train_agg = X_train_agg.drop(columns=['HospAdmTime'])
In [46]:
X_test_agg = X_test_agg.drop(columns=['HospAdmTime'])
In [47]:
X_train
Out[47]:
Patient_id HR O2Sat Temp SBP MAP DBP Resp EtCO2 BaseExcess ... Hct Hgb PTT WBC Fibrinogen Platelets Age Gender HospAdmTime ICULOS
0 p116812 102.0 100.0 NaN 99.0 84.0 NaN 22.0 NaN NaN ... NaN NaN NaN NaN NaN NaN 59.00 1 -6.01 1
1 p116812 102.0 100.0 NaN 99.0 84.0 NaN 22.0 NaN NaN ... NaN NaN NaN NaN NaN NaN 59.00 1 -6.01 2
2 p116812 102.0 100.0 NaN 99.0 84.0 76.0 18.5 NaN NaN ... NaN NaN NaN NaN NaN NaN 59.00 1 -6.01 3
3 p116812 124.0 100.0 NaN 97.0 70.0 55.0 16.0 NaN NaN ... NaN NaN NaN NaN NaN NaN 59.00 1 -6.01 4
4 p116812 98.0 100.0 NaN 95.0 73.0 62.0 18.0 NaN NaN ... 23.1 7.5 NaN 6.8 NaN 276.0 59.00 1 -6.01 5
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1048570 p016300 89.0 100.0 36.44 97.0 67.0 54.0 24.0 NaN -1.0 ... NaN NaN NaN NaN NaN NaN 68.62 1 -115.55 6
1048571 p016300 92.0 100.0 NaN 130.0 86.0 62.0 21.0 NaN -1.0 ... NaN NaN NaN NaN NaN NaN 68.62 1 -115.55 7
1048572 p016300 94.0 100.0 NaN 105.0 74.0 59.0 17.0 NaN -1.0 ... NaN NaN NaN NaN NaN NaN 68.62 1 -115.55 8
1048573 p016300 95.0 100.0 36.89 89.0 65.0 53.0 20.0 NaN -1.0 ... NaN NaN NaN NaN NaN NaN 68.62 1 -115.55 9
1048574 p016300 92.0 100.0 NaN 112.0 77.0 60.0 17.0 NaN -1.0 ... NaN NaN NaN NaN NaN NaN 68.62 1 -115.55 10

836224 rows × 39 columns

In [48]:
# X_train_agg
In [49]:
# Aggregation is done; now handle the Age issue mentioned earlier by keeping patients aged 90 or younger
y_train_agg = y_train_agg.loc[X_train_agg["Age"] <= 90]
X_train_agg = X_train_agg.loc[X_train_agg["Age"] <= 90]

y_test_agg = y_test_agg.loc[X_test_agg["Age"] <= 90]
X_test_agg = X_test_agg.loc[X_test_agg["Age"] <= 90]
In [50]:
excluded_cols = ['Gender', 'Patient_id']
corr_columns = [col for col in X_train_agg.columns if col not in excluded_cols]
corr_train = X_train_agg[corr_columns].corr()
In [51]:
#X_train_agg.describe()
In [52]:
mask = np.zeros_like(corr_train)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(26,22))
sns.heatmap(corr_train, mask=mask, square=True, annot=True, fmt=".2f", center=0, linewidths=.5, cmap="YlGn") 
Out[52]:
<Axes: >
In [53]:
filtered_corr = corr_train.where(np.abs(corr_train) > 0.5)
# plot
plt.figure(figsize=(26, 22))
sns.heatmap(filtered_corr, mask=mask, square=True, annot=True, fmt=".2f",
            center=0, linewidths=.5, cmap="YlGn", cbar=True)
plt.title("Correlation Coefficients > 0.5", fontsize=16)
plt.show()
In [54]:
# The variables of interest obtained based on the above figure
vars_of_interest = ["MAP", "Hgb", "Creatinine","HCO3","BaseExcess"]  

# For each variable of interest, print its three most correlated variables
for var in vars_of_interest:
    print(f"\nTop correlations with {var}:")
    top_corr = corr_train[var].sort_values(ascending=False)[:3]
    print(top_corr)
Top correlations with MAP:
MAP    1.000000
SBP    0.754124
DBP    0.717974
Name: MAP, dtype: float64

Top correlations with Hgb:
Hgb          1.000000
Hct          0.882848
Platelets    0.202106
Name: Hgb, dtype: float64

Top correlations with Creatinine:
Creatinine    1.000000
BUN           0.625354
Phosphate     0.261133
Name: Creatinine, dtype: float64

Top correlations with HCO3:
HCO3                1.000000
BaseExcess          0.504988
Bilirubin_direct    0.258863
Name: HCO3, dtype: float64

Top correlations with BaseExcess:
BaseExcess    1.000000
pH            0.652697
HCO3          0.504988
Name: BaseExcess, dtype: float64
In [55]:
# corr_train["Hgb"].sort_values(ascending=False)[:3]
In [56]:
X_train_agg.drop(['DBP', 'SBP','Hct'], axis=1, inplace=True)
# Remove redundant variables:
# - MAP is derived from SBP and DBP, and the three are highly correlated, so keep only MAP
# - Creatinine and BUN rise together when renal function declines; that is clinically informative, not redundant, so both stay
# - Hct and Hgb are highly correlated; for an interpretable model with few variables, keep only Hgb (hemoglobin)
In [57]:
X_test_agg.drop(['DBP', 'SBP','Hct'], axis=1, inplace=True)
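The redundancy judgments above (MAP vs SBP/DBP, Hgb vs Hct) can also be made systematically by listing all feature pairs whose absolute correlation exceeds a threshold. A small sketch on synthetic data (`high_corr_pairs` and the toy columns are illustrative, not part of the notebook's pipeline):

```python
import numpy as np
import pandas as pd

def high_corr_pairs(df, threshold=0.5):
    """Return (col_a, col_b, r) for every pair with |r| > threshold."""
    corr = df.corr()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            r = corr.iloc[i, j]
            if abs(r) > threshold:
                pairs.append((cols[i], cols[j], round(r, 3)))
    return sorted(pairs, key=lambda t: -abs(t[2]))

rng = np.random.default_rng(0)
sbp = rng.normal(120, 15, 500)
dbp = rng.normal(75, 10, 500)
toy = pd.DataFrame({"SBP": sbp, "DBP": dbp,
                    "MAP": (sbp + 2 * dbp) / 3,       # MAP derived from SBP/DBP
                    "WBC": rng.normal(8, 2, 500)})     # unrelated column
pairs = high_corr_pairs(toy, 0.5)
print(pairs)  # MAP shows up against both SBP and DBP
```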
In [58]:
X_train_agg
Out[58]:
Patient_id AST Alkalinephos BUN BaseExcess Bilirubin_direct Bilirubin_total Calcium Chloride Creatinine ... Platelets Potassium Resp SaO2 Temp TroponinI WBC pH Age Gender
0 p000001 NaN NaN 8.0 -1.0 NaN NaN 0.3 0.0 0.00 ... 21.0 0.8 -1.0 1.0 0.22 NaN 9.0 -0.04 83.14 0
1 p000002 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN -6.5 NaN -0.33 NaN NaN NaN 75.91 0
2 p000003 NaN NaN -6.0 -3.0 NaN NaN -0.1 1.0 -0.10 ... -2.0 0.3 -3.0 NaN 0.00 NaN -1.3 -0.02 45.82 0
3 p000007 NaN NaN -16.0 3.0 NaN NaN 2.1 10.0 -0.30 ... 17.0 -1.7 -16.0 NaN 0.11 NaN -0.5 0.11 64.24 1
4 p000008 NaN NaN 3.0 2.0 NaN NaN 0.8 5.0 -0.10 ... -152.0 1.2 -1.5 NaN 0.11 NaN -2.0 0.09 87.08 1
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
21743 p119995 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN -3.0 NaN -0.20 NaN NaN NaN 76.00 1
21744 p119996 NaN NaN 4.0 NaN NaN NaN 0.1 NaN 0.17 ... NaN 0.7 -2.0 NaN 0.00 0.0 NaN NaN 84.00 0
21745 p119997 NaN NaN -1.0 NaN NaN NaN 8.0 NaN -0.16 ... -24.0 0.5 6.0 NaN 1.00 NaN -0.8 NaN 30.00 1
21746 p119998 NaN NaN 12.0 NaN NaN NaN 0.5 NaN 2.17 ... -46.0 0.1 3.0 NaN 0.30 NaN -2.9 NaN 60.00 0
21747 p119999 1.0 -1.0 -1.0 NaN NaN 0.2 -0.1 NaN 0.06 ... 17.0 0.0 -1.0 NaN 0.40 NaN -1.4 NaN 84.00 0

21524 rows × 34 columns

In [421]:
#zero_ratio = (X_train_agg == 0).sum() / len(X_train_agg)
#zero_ratio = zero_ratio.sort_values(ascending=False)
#print(zero_ratio)
In [60]:
# X_temp1
In [61]:
X_temp1 = X_train_agg.copy()
X_temp1['SepsisLabel'] = y_train_agg.values
temp_cols = ['MAP','HR','Resp','O2Sat']
#imputer = SimpleImputer(strategy="median")
#X_temp1[temp_cols] = imputer.fit_transform(X_temp1[temp_cols])
In [62]:
#X_train_agg.isnull().sum()
In [63]:
#X_temp1.isnull().sum()
In [76]:
correlations = {}

for col in X_temp1.columns:
    if col in ['SepsisLabel']:
        continue
    if not np.issubdtype(X_temp1[col].dtype, np.number):
        continue  # Skip the non-numeric columns
    if X_temp1[col].nunique() > 1:
        subset = X_temp1[['SepsisLabel', col]].dropna()
        if len(subset) == 0:
            continue
        try:
            r, p = pointbiserialr(subset['SepsisLabel'], subset[col])
            correlations[col] = (r, p)
        except Exception as e:
            print(f"Skipped {col} due to error: {e}")
# Screen out the variables with a p-value > 0.05
insignificant = {var: (r, p) for var, (r, p) in correlations.items() if p > 0.05}

# Sort by absolute correlation coefficient (from smallest to largest)
sorted_insignificant = sorted(insignificant.items(), key=lambda x: abs(x[1][0]))

print(f"{'Feature':<25} {'r':>8} {'p-value':>12}")
print("-" * 45)
for var, (r, p) in sorted_insignificant:
    print(f"{var:<25} {r:>8.3f} {p:>12.3e}")
Feature                          r      p-value
---------------------------------------------
Resp                        -0.000    9.788e-01
SaO2                         0.003    8.078e-01
Lactate                     -0.004    7.908e-01
Potassium                    0.006    4.752e-01
AST                          0.006    7.377e-01
Glucose                      0.007    3.486e-01
TroponinI                   -0.008    7.132e-01
Phosphate                   -0.009    3.919e-01
Gender                       0.011    1.074e-01
PTT                          0.014    2.949e-01
PaCO2                       -0.018    1.202e-01
HCO3                         0.019    7.560e-02
EtCO2                       -0.026    3.026e-01
Fibrinogen                   0.038    2.045e-01
In [77]:
#correlations = {}
#
## Traverse all continuous variables (excluding non-numeric or irrelevant columns)
#for col in X_temp1.columns:
#    if col in ['Patient_id','SepsisLabel', 'Gender']:  # Exclude these
#        continue
#    # It is necessary to ensure that the column is not empty or a constant sequence
#    if X_temp1[col].nunique() > 1:
#        r, p = pointbiserialr(X_temp1['SepsisLabel'], X_temp1[col])
#        correlations[col] = (r, p)
#
## Sort the results and display the top few items
#sorted_corr = sorted(correlations.items(), key=lambda x: abs(x[1][0]), reverse=True)
#
#for var, (r, p) in sorted_corr[:20]:
#    print(f"{var:<20}  Correlation: {r:.3f}   p-value: {p:.4f}")
In [497]:
# X_test_agg.isnull().sum()
In [78]:
#for var, (r, p) in sorted_corr[-15:]:
#    print(f"{var:<20}  Correlation: {r:.3f}   p-value: {p:.4f}")
In [79]:
X_train_agg.drop(['EtCO2','Potassium','Glucose','Phosphate','PTT','PaCO2','HCO3','Fibrinogen',
                 'SaO2','TroponinI','Lactate','AST','Resp'], axis=1, inplace=True)
In [80]:
#X_train_agg.shape
In [81]:
X_test_agg.drop(['EtCO2','Potassium','Glucose','Phosphate','PTT','PaCO2','HCO3','Fibrinogen',
                 'SaO2','TroponinI','Lactate','AST','Resp'], axis=1, inplace=True)
In [82]:
#X_test_agg.shape

Although XGBoost does not require a linear relationship between features and the target variable, the variables dropped above had over 90% missing values. Given the imputation error and noise they would introduce, removing them improves model robustness, helps prevent overfitting, and reduces noise.
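The missing-value claim is easy to verify directly; a sketch of the check (the 0.9 threshold mirrors the criterion stated above; the toy frame is made up):

```python
import numpy as np
import pandas as pd

def high_missing_columns(df, threshold=0.9):
    """Columns whose fraction of NaN values exceeds `threshold`."""
    ratios = df.isna().mean().sort_values(ascending=False)
    return ratios[ratios > threshold]

toy = pd.DataFrame({
    "EtCO2": [np.nan] * 19 + [33.0],   # 95% missing -> flagged for removal
    "HR":    [80.0] * 18 + [np.nan] * 2,  # 10% missing -> kept
})
res = high_missing_columns(toy)
print(res)
```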

In [83]:
print(X_train_agg["Gender"].value_counts()) # Check that Gender is already coded as 0/1
# so no one-hot encoding is needed
Gender
1    12060
0     9464
Name: count, dtype: int64
In [84]:
X_train_agg.columns
Out[84]:
Index(['Patient_id', 'Alkalinephos', 'BUN', 'BaseExcess', 'Bilirubin_direct',
       'Bilirubin_total', 'Calcium', 'Chloride', 'Creatinine', 'FiO2', 'HR',
       'Hgb', 'MAP', 'Magnesium', 'O2Sat', 'Platelets', 'Temp', 'WBC', 'pH',
       'Age', 'Gender'],
      dtype='object')
In [85]:
'''
def determine_acid_base(pH, PaCO2, HCO3):
    if pH < 7.35:
        if PaCO2 > 45:
            return 'respiratory_acidosis'
        elif HCO3 > 30:
            return 'chronic_acidosis'
    elif pH > 7.45:
        if PaCO2 < 38:
            return 'respiratory_alkalosis'
        elif HCO3 > 28:
            return 'metabolic_alkalosis'
    return 'normal'
'''
Out[85]:
"\ndef determine_acid_base(pH, PaCO2, HCO3):\n    if pH < 7.35:\n        if PaCO2 > 45:\n            return 'respiratory_acidosis'\n        elif HCO3 > 30:\n            return 'chronic_acidosis'\n    elif pH > 7.45:\n        if PaCO2 < 38:\n            return 'respiratory_alkalosis'\n        elif HCO3 > 28:\n            return 'metabolic_alkalosis'\n    return 'normal'\n"
In [87]:
#acid_status = []
#
#for i in range(len(X_train_agg1)):
#    pH = X_train_agg1.iloc[i]['pH']
#    PaCO2 = X_train_agg1.iloc[i]['PaCO2']
#    HCO3 = X_train_agg1.iloc[i]['HCO3']
#    status = determine_acid_base(pH, PaCO2, HCO3)
#    acid_status.append(status)
#
#X_train_agg1['AcidBaseStatus1'] = acid_status
In [88]:
#X_train_agg1['AcidBaseStatus1'].value_counts()
In [89]:
#X_train_agg1['AcidBaseStatus'].value_counts()

I originally intended to combine the acid-base variables (pH, PaCO2, HCO3) into a single status feature, but the combined feature did not improve results.

In [419]:
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# FunctionTransformer for acid-base classification
#acidbase_transformer = Pipeline([
#    ("acid_extractor", FunctionTransformer(isAcidBaseDisturb, validate=False)),
#    ("ohe", OneHotEncoder(sparse=False))
#])
#
In [91]:
#X_train_agg1 = X_train_agg.copy()
In [92]:
#X_train_agg1
In [93]:
#X_train_agg1[acidbase_features] = acidbase_transformer.fit_transform(X_train_agg1[acidbase_features])
In [94]:
#result = acidbase_transformer.fit_transform(X_train_agg1[acidbase_features])
#print(result.shape)  
In [95]:
# List of numerical variables (excluding the acid-base trio)
#num_features = ['Alkalinephos', 'BUN', 'Bilirubin_direct',
#      'Bilirubin_total', 'Calcium', 'Creatinine', 'FiO2',
#      'HR', 'Hgb', 'Lactate', 'MAP', 'Magnesium', 'O2Sat',
#      'PTT', 'Temp', 'WBC', 'Age']
#
#acidbase_features = ['pH', 'PaCO2', 'HCO3']
#
# ColumnTransformer
#preprocessor = ColumnTransformer([
#   ("num", Pipeline([
#       ("imputer", SimpleImputer(strategy="median")),
#       ("scaler", StandardScaler())
#   ]), num_features),
#   
#   ("acid", acidbase_transformer, acidbase_features)
#], remainder='drop')
#
In [96]:
#preprocessor = ColumnTransformer([
#    ("num", Pipeline([
#        ("imputer", SimpleImputer(strategy="median")),
#        ("scaler", StandardScaler())
#    ]), num_features),
#    ("acid", acidbase_transformer, acidbase_features)
#])
#
## 
#X_processed = preprocessor.fit_transform(X_train_agg)
#
## Build the full column-name list
#all_columns = num_features + ['acidosis']
#
## DataFrame
#X_processed_df = pd.DataFrame(X_processed, columns=all_columns, index=X_train_agg.index)
#
#print(X_processed_df.head())
In [97]:
#print(X_processed.shape)  
#print(len(all_columns))  
In [98]:
#X_processed = preprocessor.fit_transform(X_train_agg)
#print(X_processed.shape)

Next, the data will be standardized to ensure all features are on a comparable scale.
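A minimal runnable version of this standardization step, sketched as a ColumnTransformer that scales numeric columns and passes the already-binary Gender through (the feature names come from the notebook; the data values are synthetic, and in the notebook the transformer is wrapped as `pipeline1`):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

num_features = ["HR", "Age"]   # subset for illustration
cat_features = ["Gender"]      # already 0/1, so just passed through

preprocessor = ColumnTransformer([
    ("num", StandardScaler(), num_features),
    ("cat", "passthrough", cat_features),
])
pipeline1 = Pipeline([("preprocess", preprocessor)])

toy = pd.DataFrame({"HR": [60.0, 80.0, 100.0],
                    "Age": [30.0, 50.0, 70.0],
                    "Gender": [0, 1, 1]})
out = pipeline1.fit_transform(toy)
print(out)  # scaled HR/Age columns, Gender unchanged
```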

In [99]:
X_train_agg.shape
Out[99]:
(21524, 21)
In [100]:
X_train_agg2 = X_train_agg.copy()
X_train_agg3 = X_train_agg.copy()
In [101]:
#standar_features = ['Alkalinephos', 'BUN', 'Bilirubin_direct',
#   'Bilirubin_total', 'Calcium', 'Creatinine', 'FiO2', 'HR', 'Hgb', 'Lactate', 'MAP', 'Magnesium', 'O2Sat','PTT', 'Temp', 'WBC', 'Age']
#
#standar_pipeline = Pipeline([
#    ("scaler", StandardScaler())
#])
#
#standar_pipeline.fit_transform(X_train_agg2[standar_features]).shape
In [102]:
X_train_agg.columns
Out[102]:
Index(['Patient_id', 'Alkalinephos', 'BUN', 'BaseExcess', 'Bilirubin_direct',
       'Bilirubin_total', 'Calcium', 'Chloride', 'Creatinine', 'FiO2', 'HR',
       'Hgb', 'MAP', 'Magnesium', 'O2Sat', 'Platelets', 'Temp', 'WBC', 'pH',
       'Age', 'Gender'],
      dtype='object')
In [144]:
# Define numerical values and categorical variables
#num_features =['Alkalinephos', 'BUN', 'BaseExcess', 'Bilirubin_direct',
#       'Bilirubin_total', 'Calcium', 'Chloride', 'Creatinine', 'FiO2', 'HR',
#       'Hgb', 'MAP', 'Magnesium', 'O2Sat', 'Platelets', 'Temp', 'WBC', 'pH',
#       'Age', ]
#cat_features = ['Gender']
#
## ColumnTransformer
#preprocessor = ColumnTransformer([
#    ("num", StandardScaler(), num_features),
#    ("cat", 'passthrough', cat_features) 
#])
#
## It is encapsulated as a whole into the Pipeline
#pipeline1 = Pipeline([
#    ("preprocess", preprocessor)
#])
#
## Proposed consolidation conversion
#X_transformed = pipeline1.fit_transform(X_train_agg3) # This is just a test
#column_names = num_features + cat_features
#X_df = pd.DataFrame(X_transformed, columns=column_names, index=X_train_agg.index)
In [98]:
#X_transformed
In [99]:
#X_df
In [100]:
# Produce the model-ready train and test matrices:
# fit the preprocessing on the training set only, then apply the already-fitted
# transform to the test set to avoid leakage
X_train_model = pipeline1.fit_transform(X_train_agg)
X_test_model = pipeline1.transform(X_test_agg)
In [101]:
X_train_model.shape # sanity-check the transformed shape
Out[101]:
(21524, 22)
In [422]:
# X_train_agg.isnull().sum()
In [216]:
X_train_agg.drop(['Bilirubin_direct'], axis=1, inplace=True)

TASK 3¶

In [220]:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import cross_val_score
import shap
from imblearn.pipeline import Pipeline 
from imblearn.over_sampling import SMOTE
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from xgboost import XGBClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from sklearn.metrics import roc_curve
from sklearn.metrics import RocCurveDisplay
from sklearn.metrics import roc_auc_score
In [222]:
## Step 1: Data cleaning + field processing
#ppl = Pipeline([
#    ('drop_columns', DropFeatures(['Patient_id'])), # Delete the "ID" field
#    ('drop_duplicates', DropDuplicateFeatures()) # Delete the duplicate fields
#])
In [297]:
class DropFeatures(BaseEstimator, TransformerMixin):
    def __init__(self, columns):
        self.columns = columns
        
    def fit(self, X, y=None):
        return self
    
    def transform(self, X):
        return X.drop(columns=self.columns)
In [298]:
class DropDuplicateFeatures(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        self.unique_columns_ = X.T.drop_duplicates().T.columns
        return self

    def transform(self, X):
        return X[self.unique_columns_]
In [341]:
X_train_agg.columns
Out[341]:
Index(['Patient_id', 'Alkalinephos', 'BUN', 'BaseExcess', 'Bilirubin_total',
       'Calcium', 'Chloride', 'Creatinine', 'FiO2', 'HR', 'Hgb', 'MAP',
       'Magnesium', 'O2Sat', 'Platelets', 'Temp', 'WBC', 'pH', 'Age',
       'Gender'],
      dtype='object')
In [299]:
xgb = XGBClassifier(objective='binary:logistic',    # Binary classification task
    eval_metric='auc',              # AUC is a robust evaluation metric for imbalanced data
    use_label_encoder=False,        # Avoid a warning (required for older xgboost versions)
    
    n_estimators=10,               # Number of trees; a moderate initial value
    learning_rate=0.01,             # Conservative learning rate to avoid overfitting
    max_depth=2,                    # Maximum depth of each tree, to control complexity
    min_child_weight=10,             # Minimum sum of instance weight (hessian) in a child
    gamma=0.1,                      # Minimum loss reduction required to make a further partition
    subsample=0.5,                  # Subsample ratio of the training instances for each tree
    colsample_bytree=0.8,          # Subsample ratio of columns per tree; 0.8 as an initial value
    n_jobs=-1,                      # Use all available cores for parallel processing
    reg_alpha=1.0,                # L1 regularization to encourage sparsity
    reg_lambda=1.0,                 # L2 regularization
    seed=42)        # Random seed for reproducibility
In [300]:
print(X_train_agg.dtypes)
Patient_id          object
Alkalinephos       float64
BUN                float64
BaseExcess         float64
Bilirubin_total    float64
Calcium            float64
Chloride           float64
Creatinine         float64
FiO2               float64
HR                 float64
Hgb                float64
MAP                float64
Magnesium          float64
O2Sat              float64
Platelets          float64
Temp               float64
WBC                float64
pH                 float64
Age                float64
Gender               int64
dtype: object
In [301]:
ppl = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and categorical encoding
    ('cleaning', ColumnTransformer([
        # 2.1: num
        ('num',make_pipeline( StandardScaler(),
              SimpleImputer(strategy='median')),  
         make_column_selector(dtype_include='float64')
        ),
        # 2.2:cat
        ('cat',make_pipeline(
            # SimpleImputer(strategy='most_frequent'),
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
       # 3. imbalanced
     ('smote', SMOTE(random_state=42)),
       # 4. Modelling
    ('XGBoost', xgb)
])
In [302]:
# X_train_agg
In [303]:
set_config(display="diagram")
In [304]:
ppl.fit(X_train_agg, y_train_agg)  # fit the full pipeline (cleaning + SMOTE + XGBoost)
In [305]:
y_pred = ppl.predict(X_test_agg)
In [306]:
fig, ax = plt.subplots(figsize=(3, 3))
ConfusionMatrixDisplay.from_estimator(
    estimator=ppl, 
    X=X_test_agg,
    y=y_test_agg,
    ax=ax
)
plt.title("Confusion Matrix")
plt.grid(False)
plt.show()
In [307]:
#xgb_Recall = recall_score(y_test_agg, y_pred)
#xgb_Precision = precision_score(y_test_agg, y_pred)
#xgb_f1 = f1_score(y_test_agg, y_pred)
#xgb_accuracy = accuracy_score(y_test_agg, y_pred)
In [308]:
def evaluate_model(y_true, y_preds, model_names):
    """
    Compute evaluation metrics (Recall, Precision, F1, Accuracy) for multiple models.

    Parameters:
    y_true: true labels (Series or array)
    y_preds: list of prediction arrays, one per model
    model_names: list of model names, aligned with y_preds

    Returns:
    a DataFrame of evaluation results
    """
    results = []
    for y_pred, name in zip(y_preds, model_names):
        recall = recall_score(y_true, y_pred)
        precision = precision_score(y_true, y_pred)
        f1 = f1_score(y_true, y_pred)
        accuracy = accuracy_score(y_true, y_pred)
        results.append([name, recall, precision, f1, accuracy])

    df = pd.DataFrame(results, columns=['Model', 'Recall', 'Precision', 'F1 Score', 'Accuracy'])
    return df
In [309]:
#xgb_1 = [(xgb_Recall, xgb_Precision, xgb_f1, xgb_accuracy)]
#xgb_1_score = pd.DataFrame(data = xgb_1, columns=['Recall','Precision','F1 Score', 'Accuracy'])
#xgb_1_score.insert(0, 'Xgboost', 'SMOTE')
#xgb_1_score
In [310]:
model_names = ['XGBoost + SMOTE']
xgb_scores = evaluate_model(y_test_agg, [y_pred], model_names)
print(xgb_scores)
             Model    Recall  Precision  F1 Score  Accuracy
0  XGBoost + SMOTE  0.661538   0.134867  0.224056  0.668337

Model Evaluation Metrics¶

To evaluate the model performance, I adopted a range of standard metrics taught in the course, including:

  • Recall: Measures the model's ability to correctly identify all positive samples. This is especially important in high-risk domains such as healthcare, where missing a positive case can be critical.

  • Precision: Measures how many of the samples predicted as positive are actually positive. It helps to assess the model's reliability in making positive predictions.

  • F1 Score: The harmonic mean of precision and recall, providing a balanced measure especially suitable when dealing with imbalanced datasets.

  • Accuracy: The proportion of total correct predictions. While useful on balanced datasets, accuracy can be misleading in highly imbalanced scenarios.

  • Confusion Matrix: Gives a breakdown of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), offering a detailed error analysis.

    Additionally, I included the ROC Curve (Receiver Operating Characteristic) and calculated the AUC (Area Under the Curve) score.

Pipeline and Imbalanced Data Handling¶

From data cleaning to model training, I constructed a streamlined pipeline using sklearn.pipeline.Pipeline (and its imblearn counterpart for resampling). Although imputation is typically included within the pipeline, here part of the preprocessing was handled beforehand because of the data's structure, so the corresponding imputation module is commented out in the pipeline.

To handle data imbalance, I applied SMOTE (Synthetic Minority Oversampling Technique) within the pipeline. Other resampling strategies such as Random Undersampling or combined approaches (e.g., SMOTE + Tomek Links) could also be considered to improve performance.

Hyperparameter Tuning with Grid Search¶

The initial model showed a recall of 0.66 but an F1 score of only 0.22, leaving clear room for improvement. To optimize the model's performance, I applied GridSearchCV for hyperparameter tuning, combined with repeated K-fold cross-validation (CV) as the resampling strategy to ensure reliable model selection and generalization.

In [311]:
ROCAUCscore = roc_auc_score(y_test_agg, y_pred)  # note: computed from hard labels, not probabilities
print(f"AUC-ROC for XGBoost with SMOTE: {ROCAUCscore:.4f}")
AUC-ROC for XGBoost with SMOTE: 0.6652
In [312]:
y_proba = ppl.predict_proba(X_test_agg)

def plot_auc_roc_curve(y_true, y_score):
    # y_score: predicted probabilities for the positive class
    fpr, tpr, _ = roc_curve(y_true, y_score)
    roc_display = RocCurveDisplay(fpr=fpr, tpr=tpr).plot()
    roc_display.figure_.set_size_inches(5, 5)
    plt.plot([0, 1], [0, 1], color='g')  # chance diagonal
# Plot the ROC curve using sklearn's display utilities
plot_auc_roc_curve(y_test_agg, y_proba[:, 1])
No artists with labels found to put in legend.  Note that artists whose label start with an underscore are ignored when legend() is called with no argument.

According to the pipeline's results, the model's recall is 0.66 while its F1 score is only 0.22, indicating that the model struggles to balance precision and recall; overall performance needs improvement.

To further optimize the model performance, I adopted Grid Search to systematically tune the hyperparameters and used K-fold cross-validation as the resampling method to ensure the stability and generalization ability of the model under different data partitions.

Hyperparameter Optimization Strategy¶

Based on the tutorial materials (https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/) and practical considerations, I avoided tuning all hyperparameters simultaneously using grid search, as this would lead to a combinatorial explosion of parameter combinations and significantly increase computational cost.

Instead, I adopted a stepwise tuning strategy, where key hyperparameters were adjusted in stages. This approach balances model performance with efficiency and makes it easier to interpret the effect of individual parameters on the final results.
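The staged strategy can be sketched as a loop that freezes each stage's best parameters before searching the next group. This sketch uses a GradientBoostingClassifier stand-in and illustrative grids, not the notebook's actual XGBoost pipeline or search space:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

# Each stage tunes a small group of parameters while keeping earlier winners fixed
stages = [
    {"learning_rate": [0.01, 0.1], "n_estimators": [50, 100]},
    {"max_depth": [2, 3]},
]
best_params = {}
for grid in stages:
    model = GradientBoostingClassifier(random_state=0, **best_params)
    search = GridSearchCV(model, grid, scoring="recall", cv=cv)
    search.fit(X, y)
    best_params.update(search.best_params_)  # freeze this stage's winners
print(best_params)
```

Each stage searches only a handful of combinations, so the total cost grows additively with the number of stages rather than multiplicatively with the full grid.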

Step 1: learning_rate, n_estimators, max_depth, min_child_weight¶

In [432]:
param_grid1 = {
    'XGBoost__learning_rate': [0.01, 0.05, 0.1],             # learning rate
    # 'XGBoost__n_estimators': [100, 200, 300],
    'XGBoost__n_estimators': [400,500,600] ,
    'XGBoost__max_depth': range(3,10,2),                         # depth of tree
    'XGBoost__min_child_weight': range(1,6,2)  

}
In [314]:
# Define the cross-validation strategy
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=42)
In [315]:
print(ppl.named_steps)
{'drop_id': DropFeatures(columns=['Patient_id']), 'drop_duplicates': DropDuplicateFeatures(), 'cleaning': ColumnTransformer(transformers=[('num',
                                 Pipeline(steps=[('standardscaler',
                                                  StandardScaler()),
                                                 ('simpleimputer',
                                                  SimpleImputer(strategy='median'))]),
                                 <sklearn.compose._column_transformer.make_column_selector object at 0x00000285C5C27520>),
                                ('cat',
                                 Pipeline(steps=[('onehotencoder',
                                                  OneHotEncoder(handle_unknown='ignore',
                                                                sparse=False))]),
                                 <sklearn.compose._column_transformer.make_column_selector object at 0x00000285C5C27700>)]), 'smote': SMOTE(random_state=42), 'XGBoost': XGBClassifier(base_score=None, booster=None, callbacks=None,
              colsample_bylevel=None, colsample_bynode=None,
              colsample_bytree=0.8, device=None, early_stopping_rounds=None,
              enable_categorical=False, eval_metric='auc', feature_types=None,
              gamma=0.1, grow_policy=None, importance_type=None,
              interaction_constraints=None, learning_rate=0.01, max_bin=None,
              max_cat_threshold=None, max_cat_to_onehot=None,
              max_delta_step=None, max_depth=2, max_leaves=None,
              min_child_weight=10, missing=nan, monotone_constraints=None,
              multi_strategy=None, n_estimators=10, n_jobs=-1,
              num_parallel_tree=None, random_state=None, ...)}
In [316]:
# Grid search over param_grid1 (GridSearchCV is not imported above, so import it here)
from sklearn.model_selection import GridSearchCV

clf1 = GridSearchCV(
    estimator=ppl,
    param_grid=param_grid1,
    scoring='recall',   # prioritise catching sepsis cases
    verbose=2,
    cv=cv)
# To fit the model
clf1.fit(X_train_agg, y_train_agg)
Fitting 10 folds for each of 108 candidates, totalling 1080 fits
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.7s
...
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=7, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   8.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   7.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   8.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   8.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   8.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   9.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   8.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   9.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   9.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=  10.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   9.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.3s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   9.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   9.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   7.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   7.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   8.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   7.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   7.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   8.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   7.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   8.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   8.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   7.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   9.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   7.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   7.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   9.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   8.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   8.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   9.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.3s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   7.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   7.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   7.4s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   8.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   8.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   8.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   8.3s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.0s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   7.1s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   8.2s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   8.8s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.5s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.7s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   8.6s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   8.9s
[CV] END XGBoost__learning_rate=0.01, XGBoost__max_depth=9, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.5s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.3s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.3s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.5s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.3s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.5s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.5s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.0s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.2s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.8s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.05, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.3s
... (267 further `[CV] END ...` lines omitted: RandomizedSearchCV printed one line per fit, 10 fits per parameter combination, while sweeping XGBoost__learning_rate over {0.05, 0.1}, XGBoost__max_depth over {3, 5, 7, 9}, XGBoost__min_child_weight over {1, 3, 5}, and XGBoost__n_estimators over {400, 500, 600}; individual fit times ranged from roughly 4 to 9 seconds.)
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   3.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   3.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=3, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   7.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   4.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=500; total time=   5.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=3, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   5.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   4.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=400; total time=   6.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=500; total time=   6.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=5, XGBoost__min_child_weight=5, XGBoost__n_estimators=600; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.4s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.5s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.6s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   5.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=400; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.1s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   7.3s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=500; total time=   6.2s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   8.0s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   7.8s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   7.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=1, XGBoost__n_estimators=600; total time=   6.9s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   6.7s
[CV] END XGBoost__learning_rate=0.1, XGBoost__max_depth=7, XGBoost__min_child_weight=3, XGBoost__n_estimators=400; total time=   5.4s
[... 148 further [CV] END lines elided: the remaining candidates shown in this portion of the verbose=2 log sweep learning_rate=0.1 with max_depth ∈ {7, 9}, min_child_weight ∈ {1, 3, 5}, n_estimators ∈ {400, 500, 600}; each combination is fitted on 10 folds (5-fold × 2 repeats) at roughly 5–9 s per fit ...]
Out[316]:
GridSearchCV(cv=RepeatedStratifiedKFold(n_repeats=2, n_splits=5, random_state=42),
             estimator=Pipeline(steps=[('drop_id',
                                        DropFeatures(columns=['Patient_id'])),
                                       ('drop_duplicates',
                                        DropDuplicateFeatures()),
                                       ('cleaning',
                                        ColumnTransformer(transformers=[('num',
                                                                         Pipeline(steps=[('standardscaler',
                                                                                          StandardScaler()),
                                                                                         ('simpleimputer',
                                                                                          SimpleImputer(strategy='median'...
                                                      max_leaves=None,
                                                      min_child_weight=10,
                                                      missing=nan,
                                                      monotone_constraints=None,
                                                      multi_strategy=None,
                                                      n_estimators=10,
                                                      n_jobs=-1,
                                                      num_parallel_tree=None,
                                                      random_state=None, ...))]),
             param_grid={'XGBoost__learning_rate': [0.01, 0.05, 0.1],
                         'XGBoost__max_depth': range(3, 10, 2),
                         'XGBoost__min_child_weight': range(1, 6, 2),
                         'XGBoost__n_estimators': [400, 500, 600]},
             scoring='recall', verbose=2)
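The `cv=RepeatedStratifiedKFold(n_repeats=2, n_splits=5)` setting above is why every parameter combination appears ten times in the verbose log: each candidate is fitted on 5 stratified folds, twice. A minimal sketch confirming the split count (the toy `X`/`y` here are illustrative, not the patient data):

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

# Tiny balanced dataset so stratification is valid
X = np.zeros((20, 2))
y = np.array([0, 1] * 10)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=42)
n_splits = sum(1 for _ in cv.split(X, y))
print(n_splits)  # 5 folds x 2 repeats = 10 fits per candidate
```

With a 3 × 4 × 3 × 3 parameter grid (108 candidates), this amounts to 1080 fits in total, matching the length of the log.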
In [317]:
clf1.cv_results_, clf1.best_params_, clf1.best_score_
Out[317]:
({'mean_fit_time': array([5.303, 5.552, ..., 7.125, 7.749]),       # 108 candidates, mean fit ~5.0-9.1 s
  'std_fit_time': array([0.603, 0.587, ..., 0.559, 0.588]),        # fit-time spread ~0.5-0.9 s
  'mean_score_time': array([0.027, 0.029, ..., 0.055, 0.061]),     # scoring ~0.03-0.06 s per fold
  'std_score_time': array([0.001, 0.001, ..., 0.002, 0.005]),
  'param_XGBoost__learning_rate': masked_array(data=[0.01 ×36, 0.05 ×36, 0.1 ×36], dtype=object),
  'param_XGBoost__max_depth': masked_array(data=[3 ×9, 5 ×9, 7 ×9, 9 ×9, cycled per learning rate]),
  ...
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False],
         fill_value='?',
              dtype=object),
  'param_XGBoost__min_child_weight': masked_array(data=[1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5,
                     1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5,
                     1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5,
                     1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5,
                     1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5,
                     1, 1, 1, 3, 3, 3, 5, 5, 5, 1, 1, 1, 3, 3, 3, 5, 5, 5],
               mask=[False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False],
         fill_value='?',
              dtype=object),
  'param_XGBoost__n_estimators': masked_array(data=[400, 500, 600, 400, 500, 600, 400, 500, 600, 400, 500,
                     600, 400, 500, 600, 400, 500, 600, 400, 500, 600, 400,
                     500, 600, 400, 500, 600, 400, 500, 600, 400, 500, 600,
                     400, 500, 600, 400, 500, 600, 400, 500, 600, 400, 500,
                     600, 400, 500, 600, 400, 500, 600, 400, 500, 600, 400,
                     500, 600, 400, 500, 600, 400, 500, 600, 400, 500, 600,
                     400, 500, 600, 400, 500, 600, 400, 500, 600, 400, 500,
                     600, 400, 500, 600, 400, 500, 600, 400, 500, 600, 400,
                     500, 600, 400, 500, 600, 400, 500, 600, 400, 500, 600,
                     400, 500, 600, 400, 500, 600, 400, 500, 600],
               mask=[False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False, False, False, False, False,
                     False, False, False, False],
         fill_value='?',
              dtype=object),
  'params': [{'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.01,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.05,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 3,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 5,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 7,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 1,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 3,
    'XGBoost__n_estimators': 600},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 400},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 500},
   {'XGBoost__learning_rate': 0.1,
    'XGBoost__max_depth': 9,
    'XGBoost__min_child_weight': 5,
    'XGBoost__n_estimators': 600}],
  'split0_test_score': array([0.38291139, 0.36392405, 0.34493671, 0.38291139, 0.36392405,
         0.34810127, 0.37974684, 0.36392405, 0.35126582, 0.26898734,
         0.24683544, 0.23101266, 0.2721519 , 0.24683544, 0.23101266,
         0.27531646, 0.25949367, 0.24367089, 0.24050633, 0.20886076,
         0.19936709, 0.24050633, 0.21202532, 0.20253165, 0.23417722,
         0.2056962 , 0.19620253, 0.2056962 , 0.18670886, 0.17088608,
         0.20253165, 0.1835443 , 0.17088608, 0.21202532, 0.19303797,
         0.1835443 , 0.19303797, 0.18037975, 0.16772152, 0.18987342,
         0.17088608, 0.16455696, 0.20253165, 0.18987342, 0.17721519,
         0.14873418, 0.16139241, 0.16455696, 0.15506329, 0.15822785,
         0.15506329, 0.16455696, 0.15189873, 0.15506329, 0.14873418,
         0.15506329, 0.15822785, 0.16455696, 0.16139241, 0.17088608,
         0.15822785, 0.16455696, 0.16772152, 0.13924051, 0.13291139,
         0.13924051, 0.15189873, 0.14556962, 0.13607595, 0.15822785,
         0.14873418, 0.15189873, 0.15506329, 0.16139241, 0.14556962,
         0.15822785, 0.16139241, 0.14556962, 0.14873418, 0.15189873,
         0.13924051, 0.15506329, 0.16772152, 0.17721519, 0.15506329,
         0.17088608, 0.18670886, 0.17088608, 0.17721519, 0.18670886,
         0.15822785, 0.16772152, 0.16772152, 0.16455696, 0.16139241,
         0.15189873, 0.15822785, 0.16772152, 0.16139241, 0.14556962,
         0.15189873, 0.15189873, 0.16139241, 0.16772152, 0.15506329,
         0.14556962, 0.14240506, 0.15506329]),
  'split1_test_score': array([0.40822785, 0.38924051, 0.38607595, 0.40822785, 0.39240506,
         0.38291139, 0.40189873, 0.39240506, 0.38291139, 0.29113924,
         0.26898734, 0.26582278, 0.28164557, 0.2721519 , 0.25316456,
         0.27848101, 0.2721519 , 0.25632911, 0.25949367, 0.2278481 ,
         0.21518987, 0.26582278, 0.23734177, 0.22151899, 0.25949367,
         0.23101266, 0.22151899, 0.22468354, 0.19936709, 0.18037975,
         0.23417722, 0.2056962 , 0.1835443 , 0.23417722, 0.21202532,
         0.19303797, 0.23734177, 0.21202532, 0.2056962 , 0.21518987,
         0.19936709, 0.20253165, 0.23101266, 0.21518987, 0.21202532,
         0.17088608, 0.16139241, 0.16455696, 0.16139241, 0.16772152,
         0.17405063, 0.16772152, 0.16455696, 0.16772152, 0.14873418,
         0.16139241, 0.15189873, 0.17405063, 0.17721519, 0.17405063,
         0.16455696, 0.17405063, 0.17405063, 0.14873418, 0.14240506,
         0.13607595, 0.14873418, 0.15189873, 0.15822785, 0.15189873,
         0.14556962, 0.14556962, 0.17721519, 0.16772152, 0.17405063,
         0.18037975, 0.1835443 , 0.17088608, 0.18037975, 0.1835443 ,
         0.17088608, 0.17088608, 0.17721519, 0.18037975, 0.17721519,
         0.17721519, 0.17405063, 0.17405063, 0.17721519, 0.1835443 ,
         0.14556962, 0.15822785, 0.15822785, 0.16139241, 0.15822785,
         0.16455696, 0.16455696, 0.16772152, 0.16139241, 0.13607595,
         0.14240506, 0.14240506, 0.13924051, 0.15506329, 0.15506329,
         0.14240506, 0.13607595, 0.14240506]),
  'split2_test_score': array([0.31545741, 0.30914826, 0.30283912, 0.31545741, 0.29968454,
         0.29652997, 0.31545741, 0.30914826, 0.30283912, 0.2555205 ,
         0.24605678, 0.23659306, 0.2555205 , 0.24290221, 0.23028391,
         0.24605678, 0.23659306, 0.23659306, 0.24290221, 0.21766562,
         0.20820189, 0.23343849, 0.22397476, 0.21135647, 0.23343849,
         0.22397476, 0.22082019, 0.22082019, 0.20820189, 0.1829653 ,
         0.22397476, 0.20820189, 0.1955836 , 0.22712934, 0.22397476,
         0.19242902, 0.21135647, 0.19873817, 0.19242902, 0.22082019,
         0.19873817, 0.1829653 , 0.21766562, 0.19242902, 0.19242902,
         0.16088328, 0.15141956, 0.13880126, 0.16403785, 0.16088328,
         0.15772871, 0.16403785, 0.15457413, 0.14511041, 0.13880126,
         0.13564669, 0.12933754, 0.14826498, 0.15141956, 0.14511041,
         0.17350158, 0.15772871, 0.15457413, 0.14511041, 0.13249211,
         0.12618297, 0.16403785, 0.16719243, 0.16088328, 0.14826498,
         0.15772871, 0.15457413, 0.15772871, 0.15141956, 0.15141956,
         0.15457413, 0.14195584, 0.15772871, 0.17350158, 0.15772871,
         0.15457413, 0.15141956, 0.16088328, 0.14826498, 0.15772871,
         0.170347  , 0.170347  , 0.15457413, 0.17350158, 0.15457413,
         0.13564669, 0.15457413, 0.14826498, 0.170347  , 0.16403785,
         0.15772871, 0.14826498, 0.15141956, 0.16403785, 0.14511041,
         0.14195584, 0.14826498, 0.14826498, 0.15141956, 0.13880126,
         0.15457413, 0.16403785, 0.16088328]),
  'split3_test_score': array([0.33438486, 0.30599369, 0.29968454, 0.32492114, 0.30283912,
         0.29652997, 0.32492114, 0.30914826, 0.29652997, 0.26182965,
         0.23343849, 0.21135647, 0.26182965, 0.24605678, 0.23343849,
         0.25867508, 0.24290221, 0.21135647, 0.22712934, 0.18611987,
         0.18927445, 0.22712934, 0.20189274, 0.18927445, 0.22082019,
         0.20820189, 0.19873817, 0.17665615, 0.15141956, 0.14195584,
         0.1829653 , 0.16088328, 0.15772871, 0.18927445, 0.17665615,
         0.15772871, 0.20504732, 0.17350158, 0.16088328, 0.20504732,
         0.18611987, 0.170347  , 0.20820189, 0.17981073, 0.15772871,
         0.15141956, 0.14195584, 0.14511041, 0.14511041, 0.12618297,
         0.11987382, 0.15141956, 0.14195584, 0.13880126, 0.15772871,
         0.14511041, 0.13564669, 0.15457413, 0.14511041, 0.14511041,
         0.13880126, 0.14195584, 0.14826498, 0.12618297, 0.11671924,
         0.11671924, 0.12302839, 0.13564669, 0.13880126, 0.14826498,
         0.15141956, 0.15141956, 0.13880126, 0.13880126, 0.13880126,
         0.13564669, 0.12302839, 0.12302839, 0.16088328, 0.13249211,
         0.14195584, 0.13249211, 0.14511041, 0.13880126, 0.14511041,
         0.14195584, 0.14826498, 0.12933754, 0.13564669, 0.14826498,
         0.13564669, 0.15457413, 0.15141956, 0.15141956, 0.14511041,
         0.15141956, 0.14511041, 0.14511041, 0.15457413, 0.13564669,
         0.12933754, 0.12933754, 0.15141956, 0.15141956, 0.14195584,
         0.14511041, 0.13880126, 0.14195584]),
  'split4_test_score': array([0.36075949, 0.34493671, 0.32594937, 0.35759494, 0.35126582,
         0.32911392, 0.36075949, 0.34810127, 0.31962025, 0.26582278,
         0.23101266, 0.20886076, 0.26265823, 0.22468354, 0.21202532,
         0.26265823, 0.23417722, 0.21202532, 0.21202532, 0.2056962 ,
         0.19303797, 0.2278481 , 0.2056962 , 0.18987342, 0.2278481 ,
         0.20886076, 0.19303797, 0.22151899, 0.19936709, 0.18037975,
         0.22468354, 0.19620253, 0.1835443 , 0.21518987, 0.19620253,
         0.17721519, 0.19620253, 0.19303797, 0.18037975, 0.18670886,
         0.1835443 , 0.18987342, 0.18987342, 0.1835443 , 0.17405063,
         0.15189873, 0.14873418, 0.13924051, 0.15506329, 0.13924051,
         0.13291139, 0.15506329, 0.14240506, 0.13291139, 0.13924051,
         0.14240506, 0.13291139, 0.15506329, 0.14556962, 0.14556962,
         0.14556962, 0.14873418, 0.14556962, 0.14556962, 0.14240506,
         0.14556962, 0.16772152, 0.15822785, 0.15822785, 0.15506329,
         0.15822785, 0.17088608, 0.15506329, 0.15189873, 0.13924051,
         0.16455696, 0.15506329, 0.15822785, 0.16772152, 0.15189873,
         0.13924051, 0.14556962, 0.14240506, 0.13924051, 0.14873418,
         0.13924051, 0.13291139, 0.15189873, 0.14873418, 0.16139241,
         0.11392405, 0.12025316, 0.12658228, 0.16455696, 0.15189873,
         0.15506329, 0.16139241, 0.15506329, 0.15506329, 0.12974684,
         0.13607595, 0.13924051, 0.15189873, 0.15189873, 0.15189873,
         0.17088608, 0.16139241, 0.16772152]),
  'split5_test_score': array([0.36392405, 0.34177215, 0.32911392, 0.36708861, 0.34177215,
         0.32911392, 0.36075949, 0.34177215, 0.32594937, 0.30063291,
         0.28481013, 0.26265823, 0.30379747, 0.27848101, 0.26582278,
         0.30379747, 0.28481013, 0.25949367, 0.25      , 0.22468354,
         0.18987342, 0.25949367, 0.22468354, 0.19936709, 0.25632911,
         0.23417722, 0.20886076, 0.19303797, 0.18037975, 0.16139241,
         0.20886076, 0.19936709, 0.17721519, 0.19936709, 0.19303797,
         0.1835443 , 0.19620253, 0.18037975, 0.17088608, 0.2056962 ,
         0.18670886, 0.17721519, 0.20253165, 0.1835443 , 0.17405063,
         0.14556962, 0.14556962, 0.13924051, 0.14556962, 0.13291139,
         0.13607595, 0.15822785, 0.12974684, 0.12658228, 0.13291139,
         0.13291139, 0.13291139, 0.14240506, 0.13924051, 0.13291139,
         0.13924051, 0.14873418, 0.14240506, 0.13291139, 0.12025316,
         0.12341772, 0.13607595, 0.11708861, 0.12025316, 0.13291139,
         0.11708861, 0.12025316, 0.14556962, 0.13291139, 0.12974684,
         0.16455696, 0.14240506, 0.13291139, 0.15506329, 0.14556962,
         0.12974684, 0.12341772, 0.13291139, 0.14556962, 0.13291139,
         0.14240506, 0.13607595, 0.14240506, 0.13924051, 0.14873418,
         0.13924051, 0.12974684, 0.12658228, 0.13291139, 0.14556962,
         0.13924051, 0.11392405, 0.13291139, 0.13291139, 0.12974684,
         0.13291139, 0.12974684, 0.12341772, 0.12025316, 0.11708861,
         0.16455696, 0.15822785, 0.14873418]),
  'split6_test_score': array([0.31329114, 0.31012658, 0.31329114, 0.33544304, 0.32278481,
         0.31962025, 0.32278481, 0.3164557 , 0.3164557 , 0.24683544,
         0.23417722, 0.21835443, 0.24683544, 0.23734177, 0.22151899,
         0.25      , 0.23734177, 0.21518987, 0.2056962 , 0.18670886,
         0.1835443 , 0.19936709, 0.18987342, 0.18037975, 0.21202532,
         0.20253165, 0.1835443 , 0.18670886, 0.16139241, 0.14873418,
         0.18037975, 0.16139241, 0.16139241, 0.1835443 , 0.16455696,
         0.15822785, 0.19620253, 0.18670886, 0.16455696, 0.1835443 ,
         0.16455696, 0.15822785, 0.18670886, 0.18037975, 0.16455696,
         0.14240506, 0.14240506, 0.12974684, 0.13291139, 0.12341772,
         0.12341772, 0.13924051, 0.14240506, 0.12974684, 0.13607595,
         0.13607595, 0.13291139, 0.13291139, 0.12974684, 0.12341772,
         0.13924051, 0.13291139, 0.12658228, 0.11708861, 0.11708861,
         0.11075949, 0.11708861, 0.13291139, 0.12341772, 0.12974684,
         0.12974684, 0.12974684, 0.13924051, 0.14556962, 0.13924051,
         0.16139241, 0.15822785, 0.16139241, 0.14873418, 0.13924051,
         0.14556962, 0.13291139, 0.14240506, 0.13924051, 0.14556962,
         0.14240506, 0.15506329, 0.14873418, 0.14873418, 0.14556962,
         0.13607595, 0.13924051, 0.13924051, 0.14873418, 0.13291139,
         0.14556962, 0.15506329, 0.14873418, 0.16772152, 0.12025316,
         0.13607595, 0.13291139, 0.14556962, 0.14240506, 0.14240506,
         0.13291139, 0.13291139, 0.12658228]),
  'split7_test_score': array([0.39432177, 0.37539432, 0.35962145, 0.38801262, 0.37539432,
         0.35962145, 0.39116719, 0.37539432, 0.36277603, 0.29652997,
         0.28391167, 0.27129338, 0.29652997, 0.29022082, 0.27444795,
         0.30283912, 0.28391167, 0.2807571 , 0.25867508, 0.22712934,
         0.22082019, 0.24921136, 0.23028391, 0.21766562, 0.25236593,
         0.23343849, 0.22397476, 0.22712934, 0.20504732, 0.19242902,
         0.21451104, 0.21135647, 0.20820189, 0.22712934, 0.22397476,
         0.20504732, 0.24921136, 0.23343849, 0.21766562, 0.23974763,
         0.21766562, 0.21135647, 0.25236593, 0.23659306, 0.20820189,
         0.1829653 , 0.17981073, 0.18611987, 0.18927445, 0.18927445,
         0.18611987, 0.19242902, 0.18927445, 0.1955836 , 0.15772871,
         0.16719243, 0.170347  , 0.18927445, 0.17665615, 0.17981073,
         0.19242902, 0.19873817, 0.19873817, 0.15141956, 0.15141956,
         0.15141956, 0.17665615, 0.16403785, 0.16403785, 0.17665615,
         0.170347  , 0.1829653 , 0.1955836 , 0.19242902, 0.19873817,
         0.19242902, 0.20504732, 0.20504732, 0.19242902, 0.21135647,
         0.21135647, 0.1829653 , 0.17981073, 0.170347  , 0.16088328,
         0.170347  , 0.170347  , 0.18611987, 0.1955836 , 0.20189274,
         0.15457413, 0.170347  , 0.17350158, 0.170347  , 0.17981073,
         0.16719243, 0.17665615, 0.17665615, 0.170347  , 0.15772871,
         0.16403785, 0.16403785, 0.170347  , 0.17350158, 0.170347  ,
         0.17350158, 0.18611987, 0.170347  ]),
  'split8_test_score': array([0.33438486, 0.33123028, 0.32492114, 0.33438486, 0.34069401,
         0.32176656, 0.34069401, 0.33438486, 0.32176656, 0.22082019,
         0.21766562, 0.20504732, 0.22712934, 0.21766562, 0.21451104,
         0.22082019, 0.20820189, 0.21766562, 0.21451104, 0.19242902,
         0.16719243, 0.21135647, 0.19242902, 0.17350158, 0.20820189,
         0.19242902, 0.17981073, 0.20189274, 0.170347  , 0.14195584,
         0.20504732, 0.1829653 , 0.170347  , 0.20820189, 0.17981073,
         0.16719243, 0.18927445, 0.170347  , 0.17350158, 0.1955836 ,
         0.170347  , 0.16403785, 0.18927445, 0.17350158, 0.16403785,
         0.14195584, 0.14195584, 0.15457413, 0.15141956, 0.14826498,
         0.15457413, 0.14826498, 0.14826498, 0.15141956, 0.12618297,
         0.12933754, 0.12302839, 0.13564669, 0.13564669, 0.12302839,
         0.13564669, 0.13880126, 0.14195584, 0.12302839, 0.11987382,
         0.11987382, 0.13249211, 0.12618297, 0.11671924, 0.13564669,
         0.12933754, 0.12933754, 0.15457413, 0.15772871, 0.15141956,
         0.16719243, 0.16719243, 0.16719243, 0.170347  , 0.17350158,
         0.170347  , 0.14826498, 0.13880126, 0.14195584, 0.16088328,
         0.16403785, 0.16088328, 0.16403785, 0.16403785, 0.16088328,
         0.15141956, 0.15141956, 0.16403785, 0.13564669, 0.14195584,
         0.13880126, 0.14826498, 0.14195584, 0.16403785, 0.11671924,
         0.11987382, 0.12933754, 0.13880126, 0.14195584, 0.13880126,
         0.16403785, 0.16088328, 0.16403785]),
  'split9_test_score': array([0.38291139, 0.33544304, 0.32911392, 0.37974684, 0.33860759,
         0.33227848, 0.37974684, 0.33860759, 0.32911392, 0.27531646,
         0.25      , 0.2278481 , 0.28164557, 0.25949367, 0.23417722,
         0.28164557, 0.25949367, 0.2278481 , 0.21518987, 0.20886076,
         0.20886076, 0.21835443, 0.21202532, 0.20253165, 0.21835443,
         0.21202532, 0.2056962 , 0.19936709, 0.1835443 , 0.17721519,
         0.19936709, 0.1835443 , 0.1835443 , 0.2056962 , 0.18670886,
         0.1835443 , 0.21518987, 0.19303797, 0.18670886, 0.22468354,
         0.20886076, 0.19936709, 0.22468354, 0.19936709, 0.19936709,
         0.15822785, 0.14556962, 0.15506329, 0.15822785, 0.15822785,
         0.16455696, 0.17088608, 0.16772152, 0.17088608, 0.17088608,
         0.16455696, 0.15822785, 0.17405063, 0.16772152, 0.16772152,
         0.16772152, 0.15506329, 0.16772152, 0.15506329, 0.15506329,
         0.15822785, 0.16772152, 0.16455696, 0.17088608, 0.16455696,
         0.17088608, 0.17088608, 0.18670886, 0.1835443 , 0.1835443 ,
         0.17721519, 0.17405063, 0.17721519, 0.19303797, 0.17405063,
         0.17405063, 0.17721519, 0.18037975, 0.16772152, 0.17088608,
         0.17405063, 0.17405063, 0.17088608, 0.17721519, 0.17088608,
         0.19303797, 0.1835443 , 0.19303797, 0.19303797, 0.18670886,
         0.1835443 , 0.16455696, 0.17088608, 0.17721519, 0.16772152,
         0.17721519, 0.17405063, 0.16772152, 0.16455696, 0.17721519,
         0.18670886, 0.17088608, 0.18987342]),
  'mean_test_score': array([0.35905742, 0.34072096, 0.33155473, 0.35937887, 0.34293715,
         0.33155872, 0.3577936 , 0.34293415, 0.33092281, 0.26834345,
         0.24968953, 0.23388472, 0.26897436, 0.25158328, 0.23704029,
         0.26802899, 0.25190772, 0.23609292, 0.23261291, 0.20860021,
         0.19753624, 0.23325281, 0.2130226 , 0.19880006, 0.23230543,
         0.2152348 , 0.20322046, 0.20575111, 0.18457753, 0.16782933,
         0.20764984, 0.18931538, 0.17919878, 0.2101735 , 0.1949986 ,
         0.18015114, 0.20890668, 0.19215949, 0.18204289, 0.20668949,
         0.18867947, 0.18204788, 0.21048497, 0.19342331, 0.18236633,
         0.15549455, 0.15202052, 0.15170107, 0.15580701, 0.15043525,
         0.15043725, 0.16118476, 0.15328036, 0.15138262, 0.14570239,
         0.14696921, 0.14254482, 0.15707982, 0.15297189, 0.15076169,
         0.15549355, 0.15612746, 0.15675838, 0.13843489, 0.13306313,
         0.13274867, 0.1485455 , 0.14633131, 0.14475302, 0.15012379,
         0.1479086 , 0.1507537 , 0.16055485, 0.15834165, 0.1551771 ,
         0.16561714, 0.16119075, 0.15991994, 0.16908318, 0.16212814,
         0.15769676, 0.15202052, 0.15676437, 0.15487362, 0.15549854,
         0.15928902, 0.1608703 , 0.15929302, 0.16371241, 0.16624506,
         0.1463363 , 0.1529649 , 0.15486164, 0.15929501, 0.15676237,
         0.15550154, 0.1536018 , 0.15581799, 0.1608693 , 0.1384319 ,
         0.14317873, 0.14412311, 0.14980733, 0.15201953, 0.14886395,
         0.15802619, 0.1551741 , 0.15676037]),
  'std_test_score': array([0.03168421, 0.0271485 , 0.02490095, 0.02924282, 0.02814294,
         0.02530451, 0.02903705, 0.02654345, 0.02559635, 0.02310853,
         0.02162762, 0.02346464, 0.02194598, 0.02215233, 0.01991265,
         0.024478  , 0.0232037 , 0.02246171, 0.01918845, 0.01519651,
         0.01537165, 0.01993256, 0.01511563, 0.01485812, 0.0174992 ,
         0.01374178, 0.01490457, 0.01648503, 0.01810338, 0.01729636,
         0.01662363, 0.01718814, 0.01447658, 0.01570468, 0.01881705,
         0.01459836, 0.01901269, 0.01809576, 0.0176547 , 0.01741634,
         0.01653562, 0.01731637, 0.02000078, 0.01825878, 0.01833677,
         0.01247567, 0.01156461, 0.01592835, 0.01410199, 0.019476  ,
         0.02073516, 0.01386581, 0.01605813, 0.02058301, 0.01291717,
         0.01330976, 0.01494553, 0.01736463, 0.01607564, 0.02001285,
         0.01796385, 0.01833304, 0.01959473, 0.01233264, 0.01362431,
         0.01500595, 0.01957243, 0.01676107, 0.01905312, 0.013905  ,
         0.01696306, 0.0192916 , 0.01856006, 0.01782095, 0.02145189,
         0.01472645, 0.02202292, 0.02194303, 0.01536687, 0.0224912 ,
         0.02312544, 0.01891622, 0.01756107, 0.01611961, 0.01238538,
         0.01487741, 0.01668462, 0.01612372, 0.01869207, 0.01804501,
         0.01962513, 0.01813217, 0.01991135, 0.01694564, 0.0160541 ,
         0.01296855, 0.01599736, 0.0135878 , 0.01131184, 0.01513893,
         0.01616394, 0.01471824, 0.01354077, 0.01441539, 0.01628808,
         0.01585883, 0.01629462, 0.01695418]),
  'rank_test_score': array([  2,   6,   8,   1,   4,   7,   3,   5,   9,  11,  15,  18,  10,
          14,  16,  12,  13,  17,  20,  27,  33,  19,  23,  32,  21,  22,
          31,  30,  39,  46,  28,  37,  44,  25,  34,  43,  26,  36,  42,
          29,  38,  41,  24,  35,  40,  73,  83,  86,  70,  91,  90,  52,
          80,  87, 100,  97, 104,  63,  81,  88,  74,  68,  67, 105, 107,
         108,  95,  99, 101,  92,  96,  89,  55,  60,  75,  48,  51,  56,
          45,  50,  62,  83,  64,  77,  72,  59,  53,  58,  49,  47,  98,
          82,  78,  57,  65,  71,  79,  69,  54, 106, 103, 102,  93,  85,
          94,  61,  76,  66])},
 {'XGBoost__learning_rate': 0.01,
  'XGBoost__max_depth': 3,
  'XGBoost__min_child_weight': 3,
  'XGBoost__n_estimators': 400},
 0.35937886834644417)

The first grid-search step returns a best learning rate of 0.01, max depth of 3, min child weight of 3, and 400 estimators (see `best_params_` above).
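The long `cv_results_` dump above is easier to read as a sorted table. A minimal sketch on a hypothetical slice of the results dictionary (in the notebook, `clf.cv_results_` can be passed to `pd.DataFrame` directly in the same way):

```python
import pandas as pd

# Hypothetical excerpt in the same shape as GridSearchCV.cv_results_
cv_results = {
    "param_XGBoost__learning_rate": [0.01, 0.05, 0.1],
    "param_XGBoost__max_depth": [3, 3, 5],
    "mean_test_score": [0.3594, 0.3407, 0.3316],
    "std_test_score": [0.0292, 0.0271, 0.0249],
    "rank_test_score": [1, 2, 3],
}

# Sort by CV rank and keep the top candidates
results = pd.DataFrame(cv_results).sort_values("rank_test_score").head(5)
print(results)
```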

Step 2: The value of gamma (regularization hyperparameter)¶

In [318]:
xgb2 = XGBClassifier(objective='binary:logistic',    # Binary classification task
    eval_metric='auc',              # AUC is a robust evaluation metric for imbalanced data
    use_label_encoder=False,        # Avoid warnings (needed for older xgboost versions)

    n_estimators=400,               # Number of trees; a moderate initial value
    learning_rate=0.01,             # Conservative learning rate to avoid overfitting
    max_depth=3,                    # Maximum depth of each tree to control complexity
    min_child_weight=1,             # Minimum sum of instance weight (hessian) in a child
    gamma=0.1,                      # Minimum loss reduction required to make a further partition
    subsample=0.8,                  # Subsample ratio of the training instances for each tree
    colsample_bytree=0.8,           # Subsample ratio of columns per tree; 0.8 as initial value
    n_jobs=-1,                      # Use all available cores for parallel processing
    reg_alpha=0.01,                 # L1 regularization to encourage sparsity
    reg_lambda=1.0,                 # L2 regularization
    seed=42)                        # Random seed for reproducibility
# param_grid2 = {'XGBoost__gamma': [0.0, 0.1, 0.2, 0.3, 0.4]}  # Scores barely changed over this range, so try larger gamma values.
param_grid2 = {'XGBoost__gamma': [0.4, 0.5]}  # Scores barely changed in the first pass, so try larger gamma values.
In [319]:
ppl2 = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and encoding
    ('cleaning', ColumnTransformer([
        # 2.1: numeric features -- scale, then median-impute any remaining gaps
        ('num', make_pipeline(StandardScaler(),
                              SimpleImputer(strategy='median')),
         make_column_selector(dtype_include='float64')
        ),
        # 2.2: categorical features -- one-hot encode
        ('cat', make_pipeline(
            # SimpleImputer(strategy='most_frequent'),  # not needed: categoricals were imputed earlier
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
    # 3. Handle class imbalance
    ('smote', SMOTE(random_state=42)),
    # 4. Modelling
    ('XGBoost', xgb2)
])
In [320]:
clf2 = GridSearchCV(
    estimator=ppl2,
    param_grid=param_grid2,scoring='recall', verbose=2, cv=cv, n_jobs=-1)
clf2.fit(X_train_agg, y_train_agg)
Fitting 10 folds for each of 2 candidates, totalling 20 fits
Out[320]:
GridSearchCV(cv=RepeatedStratifiedKFold(n_repeats=2, n_splits=5, random_state=42),
             estimator=Pipeline(steps=[('drop_id',
                                        DropFeatures(columns=['Patient_id'])),
                                       ('drop_duplicates',
                                        DropDuplicateFeatures()),
                                       ('cleaning',
                                        ColumnTransformer(transformers=[('num',
                                                                         Pipeline(steps=[('standardscaler',
                                                                                          StandardScaler()),
                                                                                         ('simpleimputer',
                                                                                          SimpleImputer(strategy='median'...
                                                      learning_rate=0.01,
                                                      max_bin=None,
                                                      max_cat_threshold=None,
                                                      max_cat_to_onehot=None,
                                                      max_delta_step=None,
                                                      max_depth=3,
                                                      max_leaves=None,
                                                      min_child_weight=1,
                                                      missing=nan,
                                                      monotone_constraints=None,
                                                      multi_strategy=None,
                                                      n_estimators=400,
                                                      n_jobs=-1,
                                                      num_parallel_tree=None,
                                                      random_state=None, ...))]),
             n_jobs=-1, param_grid={'XGBoost__gamma': [0.4, 0.5]},
             scoring='recall', verbose=2)
In [321]:
clf2.cv_results_, clf2.best_params_, clf2.best_score_
Out[321]:
({'mean_fit_time': array([16.70156045, 15.399824  ]),
  'std_fit_time': array([0.96172706, 4.08716992]),
  'mean_score_time': array([0.09935548, 0.07554324]),
  'std_score_time': array([0.02513512, 0.02673715]),
  'param_XGBoost__gamma': masked_array(data=[0.4, 0.5],
               mask=[False, False],
         fill_value='?',
              dtype=object),
  'params': [{'XGBoost__gamma': 0.4}, {'XGBoost__gamma': 0.5}],
  'split0_test_score': array([0.38291139, 0.38924051]),
  'split1_test_score': array([0.41139241, 0.41139241]),
  'split2_test_score': array([0.30914826, 0.30283912]),
  'split3_test_score': array([0.32492114, 0.32492114]),
  'split4_test_score': array([0.34493671, 0.34493671]),
  'split5_test_score': array([0.35126582, 0.35443038]),
  'split6_test_score': array([0.30063291, 0.30063291]),
  'split7_test_score': array([0.37539432, 0.37539432]),
  'split8_test_score': array([0.34384858, 0.34384858]),
  'split9_test_score': array([0.37341772, 0.37341772]),
  'mean_test_score': array([0.35178693, 0.35210538]),
  'std_test_score': array([0.03288392, 0.03439636]),
  'rank_test_score': array([2, 1])},
 {'XGBoost__gamma': 0.5},
 0.3521053787485525)

The recall barely changes between gamma = 0.4 and 0.5 (0.3518 vs 0.3521), so the choice makes little practical difference; I carry gamma = 0.5 forward into the next step.
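This conclusion can be checked numerically: the recall gap between the two gamma candidates is two orders of magnitude smaller than the fold-to-fold standard deviation, so neither value is meaningfully better. A small sketch with the means and stds copied from `clf2.cv_results_` above:

```python
import numpy as np

gammas = [0.4, 0.5]                        # candidates from param_grid2
mean = np.array([0.35178693, 0.35210538])  # mean_test_score from clf2.cv_results_
std = np.array([0.03288392, 0.03439636])   # std_test_score from clf2.cv_results_

delta = mean[1] - mean[0]
print(f"recall gap: {delta:.5f}, typical fold std: {std.mean():.3f}")
```

Because the gap (~0.0003) sits well inside the CV noise (~0.033), either gamma is defensible.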

Step 3: subsample, colsample_bytree, reg_alpha / reg_lambda¶

The main model structure is now fixed. To save compute and avoid code redundancy, I tune subsample and colsample_bytree (which control the row and column sampling rates and reduce overfitting) together with reg_alpha / reg_lambda (the L1/L2 regularization terms that penalize model complexity) in a single search.
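Before launching the joint search it is worth estimating its cost. A quick sketch, using the same candidate lists as `param_grid3` in the next cell, shows the grid has 3 × 3 × 5 × 4 = 180 candidates, i.e. 1800 fits under the 5-fold × 2-repeat CV:

```python
from itertools import product

# Candidate lists, matching param_grid3 in the next cell
grid = {
    "subsample": [0.75, 0.8, 0.85],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "reg_alpha": [1e-5, 0, 0.01, 0.1, 100],
    "reg_lambda": [0, 0.5, 1, 1.5],
}

n_candidates = len(list(product(*grid.values())))
n_folds = 5 * 2  # RepeatedStratifiedKFold(n_splits=5, n_repeats=2)
print(n_candidates, "candidates,", n_candidates * n_folds, "total fits")
# → 180 candidates, 1800 total fits
```

At roughly 16 s per fit (see `mean_fit_time` below), this is on the order of eight hours of single-core work, which is why `n_jobs=-1` matters here.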

In [322]:
xgb3 = XGBClassifier(
    eval_metric='auc',              # AUC is a robust evaluation metric for imbalanced data
    use_label_encoder=False,        # Avoid warnings (needed for older xgboost versions)

    n_estimators=400,               # Number of trees; a moderate initial value
    learning_rate=0.01,             # Conservative learning rate to avoid overfitting
    max_depth=3,                    # Maximum depth of each tree to control complexity
    min_child_weight=1,             # Minimum sum of instance weight (hessian) in a child
    gamma=0.5,                      # Minimum loss reduction required to make a further partition
    subsample=0.8,                  # Subsample ratio of the training instances for each tree
    colsample_bytree=0.8,           # Subsample ratio of columns per tree; 0.8 as initial value
    n_jobs=-1,                      # Use all available cores for parallel processing
    reg_alpha=0.01,                 # L1 regularization to encourage sparsity
    reg_lambda=1.0,                 # L2 regularization
    seed=42)                        # Random seed for reproducibility
# First, coarser grid:
# param_grid3 = {
#     'XGBoost__subsample': [0.6, 0.8, 1.0],
#     'XGBoost__colsample_bytree': [0.6, 0.8, 1.0],
#     'XGBoost__reg_alpha': [0, 0.01, 0.1],
#     'XGBoost__reg_lambda': [1, 1.5, 2]}

param_grid3 = {
    'XGBoost__subsample': [0.75, 0.8, 0.85],
    'XGBoost__colsample_bytree': [0.6, 0.8, 1.0],
    'XGBoost__reg_alpha': [1e-5, 0, 0.01, 0.1, 100],
    'XGBoost__reg_lambda': [0, 0.5, 1, 1.5]}
In [323]:
ppl3 = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and encoding
    ('cleaning', ColumnTransformer([
        # 2.1: numeric features -- scale, then median-impute any remaining gaps
        ('num', make_pipeline(StandardScaler(),
                              SimpleImputer(strategy='median')),
         make_column_selector(dtype_include='float64')
        ),
        # 2.2: categorical features -- one-hot encode
        ('cat', make_pipeline(
            # SimpleImputer(strategy='most_frequent'),  # not needed: categoricals were imputed earlier
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
    # 3. Handle class imbalance
    ('smote', SMOTE(random_state=42)),
    # 4. Modelling
    ('XGBoost', xgb3)
])
In [324]:
clf3 = GridSearchCV(
    estimator=ppl3,
    param_grid=param_grid3,scoring='recall', verbose=2, cv=cv, n_jobs=-1)
clf3.fit(X_train_agg, y_train_agg)
Fitting 10 folds for each of 180 candidates, totalling 1800 fits
Out[324]:
GridSearchCV(cv=RepeatedStratifiedKFold(n_repeats=2, n_splits=5, random_state=42),
             estimator=Pipeline(steps=[('drop_id',
                                        DropFeatures(columns=['Patient_id'])),
                                       ('drop_duplicates',
                                        DropDuplicateFeatures()),
                                       ('cleaning',
                                        ColumnTransformer(transformers=[('num',
                                                                         Pipeline(steps=[('standardscaler',
                                                                                          StandardScaler()),
                                                                                         ('simpleimputer',
                                                                                          SimpleImputer(strategy='median'...
                                                      min_child_weight=1,
                                                      missing=nan,
                                                      monotone_constraints=None,
                                                      multi_strategy=None,
                                                      n_estimators=400,
                                                      n_jobs=-1,
                                                      num_parallel_tree=None,
                                                      random_state=None, ...))]),
             n_jobs=-1,
             param_grid={'XGBoost__colsample_bytree': [0.6, 0.8, 1.0],
                         'XGBoost__reg_alpha': [1e-05, 0, 0.01, 0.1, 100],
                         'XGBoost__reg_lambda': [0, 0.5, 1, 1.5],
                         'XGBoost__subsample': [0.75, 0.8, 0.85]},
             scoring='recall', verbose=2)
In [325]:
clf3.cv_results_, clf3.best_params_, clf3.best_score_
Out[325]:
({'mean_fit_time': array([15.37, 15.77, 15.57, ..., 16.85, 12.80]),    # 180 candidates, ~15-18 s fit each
  'std_fit_time': array([ 0.71,  0.75,  0.75, ...,  0.49,  1.60]),
  'mean_score_time': array([ 0.14,  0.17,  0.15, ...,  0.14,  0.06]),  # scoring ~0.1-0.2 s each
  'std_score_time': array([ 0.04,  0.03,  0.05, ...,  0.04,  0.01]),
  # searched grid (3 x 5 x 4 x 3 = 180 combinations):
  'param_XGBoost__colsample_bytree': masked_array(data=[0.6, 0.8, 1.0, ...]),
  'param_XGBoost__reg_alpha': masked_array(data=[1e-05, 0, 0.01, 0.1, 100, ...]),
  'param_XGBoost__reg_lambda': masked_array(data=[0, 0.5, 1, 1.5, ...]),
  'param_XGBoost__subsample': masked_array(data=[0.75, 0.8, 0.85, ...]),
  'params': [{'XGBoost__colsample_bytree': 0.6,
     'XGBoost__reg_alpha': 1e-05,
     'XGBoost__reg_lambda': 0,
     'XGBoost__subsample': 0.75},
    ...,
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 0.8,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 1e-05,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.01,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 0.1,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 0.5,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1,
    'XGBoost__subsample': 0.85},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.75},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.8},
   {'XGBoost__colsample_bytree': 1.0,
    'XGBoost__reg_alpha': 100,
    'XGBoost__reg_lambda': 1.5,
    'XGBoost__subsample': 0.85}],
  'split0_test_score': array([0.37341772, 0.37341772, 0.37974684, ...,
         0.50316456, 0.50316456, 0.50632911]),
  'split1_test_score': array([0.39873418, 0.41455696, 0.40822785, ...,
         0.50316456, 0.50316456, 0.5       ]),
  'split2_test_score': array([0.31230284, 0.31230284, 0.30599369, ...,
         0.45425868, 0.4511041 , 0.4384858 ]),
  'split3_test_score': array([0.30914826, 0.30283912, 0.31545741, ...,
         0.48264984, 0.47634069, 0.46687697]),
  'split4_test_score': array([0.36075949, 0.36075949, 0.35443038, ...,
         0.44303797, 0.42088608, 0.41455696]),
  'split5_test_score': array([0.33544304, 0.33544304, 0.33860759, ...,
         0.50632911, 0.48734177, 0.48734177]),
  'split6_test_score': array([0.30379747, 0.29113924, 0.30063291, ...,
         0.48417722, 0.47468354, 0.46835443]),
  'split7_test_score': array([0.37223975, 0.38801262, 0.36908517, ...,
         0.47634069, 0.47003155, 0.46056782]),
  'split8_test_score': array([0.33753943, 0.34384858, 0.34069401, 0.33753943, 0.34069401,
         0.33438486, 0.33438486, 0.34069401, 0.33438486, 0.33438486,
         0.34384858, 0.33753943, 0.33753943, 0.34384858, 0.34069401,
         0.33753943, 0.34069401, 0.33438486, 0.33438486, 0.34069401,
         0.33438486, 0.33438486, 0.34384858, 0.33753943, 0.33438486,
         0.34384858, 0.34069401, 0.33753943, 0.34384858, 0.33438486,
         0.33438486, 0.34069401, 0.33438486, 0.33438486, 0.34384858,
         0.33753943, 0.33438486, 0.34384858, 0.33438486, 0.33753943,
         0.34384858, 0.33438486, 0.33753943, 0.34069401, 0.33753943,
         0.33753943, 0.34069401, 0.34069401, 0.40694006, 0.40063091,
         0.39432177, 0.40694006, 0.40378549, 0.39432177, 0.40694006,
         0.40378549, 0.39432177, 0.41009464, 0.40694006, 0.39747634,
         0.33753943, 0.33438486, 0.3533123 , 0.34069401, 0.34069401,
         0.34700315, 0.33123028, 0.34384858, 0.33753943, 0.34069401,
         0.34069401, 0.35015773, 0.33753943, 0.33438486, 0.3533123 ,
         0.34069401, 0.34069401, 0.34700315, 0.33123028, 0.34384858,
         0.33753943, 0.34069401, 0.34069401, 0.35015773, 0.33123028,
         0.33438486, 0.34700315, 0.33753943, 0.34700315, 0.34700315,
         0.33123028, 0.34384858, 0.33753943, 0.34069401, 0.34384858,
         0.35015773, 0.34069401, 0.33123028, 0.34700315, 0.33753943,
         0.35015773, 0.34700315, 0.33438486, 0.33438486, 0.33753943,
         0.34069401, 0.34384858, 0.35015773, 0.41955836, 0.42271293,
         0.41009464, 0.41955836, 0.41955836, 0.41009464, 0.41955836,
         0.42271293, 0.40694006, 0.42271293, 0.41640379, 0.41009464,
         0.32176656, 0.33123028, 0.32807571, 0.32807571, 0.32176656,
         0.32807571, 0.33123028, 0.33438486, 0.33123028, 0.33438486,
         0.32807571, 0.32807571, 0.32176656, 0.33123028, 0.32807571,
         0.32807571, 0.32176656, 0.32807571, 0.33123028, 0.33438486,
         0.33123028, 0.33438486, 0.32807571, 0.32807571, 0.32176656,
         0.32492114, 0.32807571, 0.32492114, 0.32492114, 0.33438486,
         0.33123028, 0.33438486, 0.33123028, 0.33438486, 0.32176656,
         0.32807571, 0.32807571, 0.32807571, 0.32807571, 0.32807571,
         0.33753943, 0.33123028, 0.32492114, 0.32176656, 0.33753943,
         0.33753943, 0.32176656, 0.33123028, 0.42271293, 0.41009464,
         0.39432177, 0.42586751, 0.41009464, 0.41009464, 0.42271293,
         0.40694006, 0.40063091, 0.42586751, 0.41009464, 0.40063091]),
  'split9_test_score': array([0.37025316, 0.37341772, 0.37658228, 0.36708861, 0.37025316,
         0.37025316, 0.36392405, 0.36075949, 0.37658228, 0.36708861,
         0.36075949, 0.37658228, 0.37025316, 0.37341772, 0.37658228,
         0.36708861, 0.37025316, 0.37025316, 0.36392405, 0.36075949,
         0.37658228, 0.36708861, 0.36075949, 0.37658228, 0.36708861,
         0.37341772, 0.37658228, 0.36708861, 0.37025316, 0.37025316,
         0.36708861, 0.36075949, 0.37658228, 0.36708861, 0.36075949,
         0.37658228, 0.36392405, 0.37025316, 0.37658228, 0.36708861,
         0.37025316, 0.37658228, 0.36708861, 0.36392405, 0.37658228,
         0.37658228, 0.36392405, 0.37974684, 0.4778481 , 0.47151899,
         0.46835443, 0.4778481 , 0.47151899, 0.46518987, 0.48101266,
         0.46835443, 0.46518987, 0.4778481 , 0.46835443, 0.46835443,
         0.36075949, 0.37658228, 0.37974684, 0.36075949, 0.37341772,
         0.38291139, 0.36392405, 0.37341772, 0.38291139, 0.36075949,
         0.37341772, 0.37658228, 0.36075949, 0.37658228, 0.37974684,
         0.36075949, 0.37341772, 0.38291139, 0.36392405, 0.37341772,
         0.38291139, 0.36075949, 0.37341772, 0.37658228, 0.36075949,
         0.37658228, 0.38607595, 0.36392405, 0.37341772, 0.37658228,
         0.36392405, 0.37341772, 0.38291139, 0.36075949, 0.37341772,
         0.37658228, 0.36708861, 0.37025316, 0.38291139, 0.36075949,
         0.37025316, 0.38291139, 0.36392405, 0.37025316, 0.38291139,
         0.36392405, 0.37025316, 0.37658228, 0.50632911, 0.5       ,
         0.49367089, 0.50316456, 0.50632911, 0.49050633, 0.50316456,
         0.5       , 0.49367089, 0.5       , 0.50949367, 0.5       ,
         0.37341772, 0.37341772, 0.38291139, 0.37341772, 0.37658228,
         0.38291139, 0.37974684, 0.37341772, 0.37658228, 0.37658228,
         0.37974684, 0.38924051, 0.37341772, 0.37341772, 0.38291139,
         0.37341772, 0.37658228, 0.38291139, 0.37974684, 0.37341772,
         0.37658228, 0.37658228, 0.37974684, 0.38924051, 0.37025316,
         0.37341772, 0.37341772, 0.37658228, 0.37658228, 0.38291139,
         0.37974684, 0.37341772, 0.37658228, 0.37658228, 0.37974684,
         0.38924051, 0.37341772, 0.37341772, 0.38291139, 0.37974684,
         0.37025316, 0.37974684, 0.37658228, 0.37658228, 0.37974684,
         0.37341772, 0.37025316, 0.37658228, 0.51898734, 0.50632911,
         0.50632911, 0.51898734, 0.50632911, 0.50316456, 0.51582278,
         0.50949367, 0.49683544, 0.51898734, 0.50949367, 0.5       ]),
  'mean_test_score': array([0.34736353, 0.34957373, 0.34894581, 0.34704408, 0.34641317,
         0.34768099, 0.34578225, 0.3454648 , 0.34451943, 0.34957174,
         0.34767799, 0.34704808, 0.34736353, 0.34957373, 0.34894581,
         0.34704408, 0.34641317, 0.34768099, 0.34578225, 0.3454648 ,
         0.34451943, 0.34957174, 0.34767799, 0.34704808, 0.34736553,
         0.34894282, 0.34957773, 0.34672863, 0.34672763, 0.3483129 ,
         0.34578325, 0.34578126, 0.34483588, 0.34862437, 0.34736254,
         0.34768099, 0.347677  , 0.34672463, 0.34736353, 0.34735954,
         0.34767799, 0.34704608, 0.34894282, 0.34704908, 0.34641616,
         0.34894182, 0.34673062, 0.34989418, 0.42984766, 0.42542427,
         0.41941561, 0.43142595, 0.42510781, 0.42099589, 0.43047958,
         0.4266881 , 0.41973406, 0.43174041, 0.42605618, 0.42131334,
         0.35494949, 0.35115901, 0.3527323 , 0.35368067, 0.35084055,
         0.35368566, 0.35526494, 0.35273629, 0.35494949, 0.35747514,
         0.35589786, 0.35620732, 0.35494949, 0.35115901, 0.3527323 ,
         0.35368067, 0.35084055, 0.35368566, 0.35526494, 0.35273629,
         0.35494949, 0.35747514, 0.35589786, 0.35620732, 0.35368566,
         0.35052809, 0.35399812, 0.35463004, 0.35147147, 0.35273729,
         0.35621331, 0.35210538, 0.35431757, 0.35747414, 0.35526594,
         0.3571527 , 0.35620932, 0.3505251 , 0.35399513, 0.35305075,
         0.35178393, 0.35431558, 0.35526494, 0.35115601, 0.35526894,
         0.35589686, 0.35368267, 0.35589486, 0.46050793, 0.45513417,
         0.44850956, 0.45987402, 0.45419079, 0.44724274, 0.45987601,
         0.45450625, 0.44850956, 0.45924011, 0.45513916, 0.44851056,
         0.35716767, 0.35716468, 0.35811704, 0.36032524, 0.35590784,
         0.35748513, 0.3584315 , 0.35716667, 0.35969433, 0.36190452,
         0.35558839, 0.35937987, 0.35716767, 0.35716468, 0.35811704,
         0.36032524, 0.35590784, 0.35748513, 0.3584315 , 0.35716667,
         0.35969433, 0.36190452, 0.35558839, 0.35937987, 0.35558839,
         0.35811604, 0.35621631, 0.35906042, 0.35653875, 0.3584315 ,
         0.35811305, 0.35748313, 0.35969433, 0.36158807, 0.35653676,
         0.35874696, 0.35811205, 0.35526994, 0.35937887, 0.35874396,
         0.35938186, 0.35874496, 0.35779859, 0.35559138, 0.36190253,
         0.3606397 , 0.35653676, 0.35779759, 0.47853193, 0.46810486,
         0.46337   , 0.47948129, 0.46904824, 0.46431238, 0.47885138,
         0.46905123, 0.46304955, 0.47979775, 0.47063052, 0.46431438]),
  'std_test_score': array([0.03076727, 0.03766012, 0.03337361, 0.03335951, 0.03530637,
         0.03354587, 0.03445229, 0.03495667, 0.03150225, 0.03477407,
         0.03521249, 0.03152982, 0.03076727, 0.03766012, 0.03337361,
         0.03335951, 0.03530637, 0.03354587, 0.03445229, 0.03495667,
         0.03150225, 0.03477407, 0.03521249, 0.03152982, 0.03137062,
         0.03832619, 0.03373179, 0.03304369, 0.03471667, 0.03353911,
         0.03431874, 0.03553299, 0.0318054 , 0.03511748, 0.03425743,
         0.0312896 , 0.03215658, 0.03536657, 0.03324008, 0.03373278,
         0.03855255, 0.03188908, 0.03568132, 0.03800204, 0.0350042 ,
         0.03505136, 0.03509023, 0.03236774, 0.02403786, 0.02848386,
         0.03056153, 0.02566129, 0.02791648, 0.03094497, 0.0254407 ,
         0.02786362, 0.03094264, 0.02443482, 0.02650271, 0.03135501,
         0.0314373 , 0.03241449, 0.03342066, 0.03255427, 0.03398713,
         0.03327853, 0.03078215, 0.03353395, 0.03335409, 0.03131573,
         0.02995179, 0.03210121, 0.0314373 , 0.03241449, 0.03342066,
         0.03255427, 0.03398713, 0.03327853, 0.03078215, 0.03353395,
         0.03335409, 0.03131573, 0.02995179, 0.03210121, 0.03038392,
         0.03276366, 0.03274387, 0.03167466, 0.03498595, 0.03257102,
         0.03032243, 0.03439636, 0.03351701, 0.03133845, 0.0287987 ,
         0.03446382, 0.03147677, 0.03071304, 0.03159705, 0.03141552,
         0.0325056 , 0.03361026, 0.03092038, 0.03057485, 0.03545561,
         0.03104683, 0.03239938, 0.03329561, 0.0303491 , 0.03121328,
         0.03243613, 0.03052538, 0.03198105, 0.03138301, 0.03155349,
         0.03081876, 0.03472528, 0.02986269, 0.03427879, 0.03459657,
         0.02887339, 0.03015294, 0.03245345, 0.03022366, 0.03406297,
         0.0326731 , 0.02807128, 0.03183181, 0.03184856, 0.03136835,
         0.03316561, 0.03343096, 0.02887339, 0.03015294, 0.03245345,
         0.03022366, 0.03406297, 0.0326731 , 0.02807128, 0.03183181,
         0.03184856, 0.03136835, 0.03316561, 0.03343096, 0.02974359,
         0.03041926, 0.03104897, 0.03142398, 0.03329934, 0.03304265,
         0.02698513, 0.0296013 , 0.03259448, 0.03113802, 0.03387051,
         0.03297638, 0.02861764, 0.03037365, 0.03167619, 0.03122777,
         0.03207318, 0.03017116, 0.03099497, 0.03307604, 0.03301463,
         0.02828076, 0.03223849, 0.02850154, 0.0294516 , 0.03187621,
         0.03552738, 0.02950542, 0.03220728, 0.03400273, 0.03073945,
         0.03346521, 0.03456186, 0.02878023, 0.03236713, 0.0347532 ]),
  'rank_test_score': array([154, 135, 139, 163, 170, 146, 173, 176, 179, 137, 150, 160, 154,
         135, 139, 163, 170, 146, 173, 176, 179, 137, 150, 160, 153, 141,
         134, 166, 167, 145, 172, 175, 178, 144, 157, 146, 152, 168, 154,
         158, 149, 162, 141, 159, 169, 143, 165, 133,  28,  31,  36,  26,
          32,  34,  27,  29,  35,  25,  30,  33, 102, 126, 121, 115, 129,
         111,  99, 119, 102,  68,  88,  84, 102, 126, 121, 115, 129, 111,
          99, 119, 102,  68,  88,  84, 111, 131, 109, 106, 125, 118,  82,
         123, 107,  70,  98,  77,  83, 132, 110, 117, 124, 108,  99, 128,
          97,  90, 114,  91,  13,  18,  22,  15,  20,  24,  14,  19,  22,
          16,  17,  21,  71,  75,  58,  42,  86,  65,  55,  73,  44,  37,
          93,  48,  71,  75,  58,  42,  86,  65,  55,  73,  44,  37,  93,
          48,  93,  60,  81,  51,  78,  55,  61,  67,  44,  40,  79,  52,
          62,  96,  50,  54,  47,  53,  63,  92,  39,  41,  79,  64,   4,
           8,  11,   2,   7,  10,   3,   6,  12,   1,   5,   9])},
 {'XGBoost__colsample_bytree': 1.0,
  'XGBoost__reg_alpha': 100,
  'XGBoost__reg_lambda': 1.5,
  'XGBoost__subsample': 0.75},
 0.47979774787365737)

Finally! Adjust the learning rate and add more trees.¶

In [326]:
xgb4 = XGBClassifier(
    eval_metric='auc',              # AUC is a robust evaluation metric for imbalanced data
    use_label_encoder=False,        # Avoid warnings (required for older xgboost versions)

    n_estimators=1000,              # Number of trees; more trees pair with the smaller learning rate
    learning_rate=0.01,             # Conservative learning rate to avoid overfitting
    max_depth=3,                    # Maximum depth of each tree, to control complexity
    min_child_weight=1,             # Minimum sum of instance weight (hessian) in a child
    gamma=0.1,                      # Minimum loss reduction required to make a further partition
    subsample=0.75,                 # Subsample ratio of the training instances for each tree
    colsample_bytree=1.0,           # Subsample ratio of columns when constructing each tree
    n_jobs=-1,                      # Use all available cores for parallel processing
    reg_alpha=100,                  # L1 regularization to encourage sparsity
    reg_lambda=1.5,                 # L2 regularization
    seed=42)                        # Random seed for reproducibility
In [198]:
ppl.named_steps['XGBoost'].set_params(
    eval_metric='auc',              # AUC is a robust evaluation metric for imbalanced data
    use_label_encoder=False,        # Avoid warnings (required for older xgboost versions)

    n_estimators=1000,              # Number of trees; more trees pair with the smaller learning rate
    learning_rate=0.01,             # Conservative learning rate to avoid overfitting
    max_depth=3,                    # Maximum depth of each tree, to control complexity
    min_child_weight=1,             # Minimum sum of instance weight (hessian) in a child
    gamma=0.1,                      # Minimum loss reduction required to make a further partition
    subsample=0.75,                 # Subsample ratio of the training instances for each tree
    colsample_bytree=1.0,           # Subsample ratio of columns when constructing each tree
    n_jobs=-1,                      # Use all available cores for parallel processing
    reg_alpha=100,                  # L1 regularization to encourage sparsity
    reg_lambda=1.5,                 # L2 regularization
    seed=42)  # set_params avoids redundant code, but to prevent stale state I re-create the pipeline below
Out[198]:
XGBClassifier(base_score=None, booster=None, callbacks=None,
              colsample_bylevel=None, colsample_bynode=None,
              colsample_bytree=0.8, device=None, early_stopping_rounds=None,
              enable_categorical=False, eval_metric='auc', feature_types=None,
              gamma=0.1, grow_policy=None, importance_type=None,
              interaction_constraints=None, learning_rate=0.05, max_bin=None,
              max_cat_threshold=None, max_cat_to_onehot=None,
              max_delta_step=None, max_depth=4, max_leaves=None,
              min_child_weight=5, missing=nan, monotone_constraints=None,
              multi_strategy=None, n_estimators=500, n_jobs=-1,
              num_parallel_tree=None, random_state=42, ...)
In [327]:
ppl4 = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and encoding
    ('cleaning', ColumnTransformer([
        # 2.1: numeric columns: scale, then median-impute
        ('num', make_pipeline(StandardScaler(),
                              SimpleImputer(strategy='median')),
         make_column_selector(dtype_include='float64')
        ),
        # 2.2: categorical columns: one-hot encode (imputation was handled earlier)
        ('cat', make_pipeline(
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
    # 3. Imbalance handling
    ('smote', SMOTE(random_state=42)),
    # 4. Modelling
    ('XGBoost', xgb4)
])
In [331]:
ppl4.fit(X_train_agg, y_train_agg)   # fit() returns the fitted pipeline itself

TASK 4¶

In [332]:
from sklearn.metrics import ConfusionMatrixDisplay

fig, ax = plt.subplots(figsize=(3, 3))
ConfusionMatrixDisplay.from_estimator(ppl4, X_test_agg, y_test_agg, ax=ax)
plt.title("Confusion Matrix")
plt.grid(False)
plt.show()
[Figure: confusion matrix for ppl4 on the held-out test set]
In [333]:
y_pred_final = ppl4.predict(X_test_agg)
In [334]:
model_names = ['XGBoost(trained) + SMOTE']
xgb_scores1 = evaluate_model(y_test_agg, [y_pred_final], model_names)
print(xgb_scores1)
                      Model    Recall  Precision  F1 Score  Accuracy
0  XGBoost(trained) + SMOTE  0.412821   0.248841  0.310511  0.867298
In [335]:
#xgboost = XGBClassifier(n_estimators=150, use_label_encoder=False, scale_pos_weight=12, eval_metric="aucpr", verbosity=1, disable_default_eval_metric=1)
cv_xgboost = cross_validate(ppl4, X_train_agg, y_train_agg, cv=cv, scoring="f1", return_train_score=True, verbose=1)
cv_xgboost
Out[335]:
{'fit_time': array([6.3325274 , 7.47797966, 6.24012613, 7.48114157, 6.2661171 ,
        7.31904078, 6.49614215, 7.47311616, 6.34808135, 7.40204835]),
 'score_time': array([0.03522062, 0.03600883, 0.0334053 , 0.0378747 , 0.03501558,
        0.03862357, 0.03795218, 0.03432441, 0.03986788, 0.03514814]),
 'test_score': array([0.29190422, 0.26804124, 0.25681818, 0.28476085, 0.28400955,
        0.26879271, 0.27464009, 0.28538012, 0.27570093, 0.28416486]),
 'train_score': array([0.30359147, 0.30277186, 0.3076492 , 0.30683403, 0.29416935,
        0.30182421, 0.30655691, 0.30337397, 0.29687939, 0.31168831])}
In [377]:
cv_xgboost1 = cross_validate(ppl4, X_train_agg, y_train_agg, cv=cv, scoring="recall", return_train_score=True, verbose=1)
cv_xgboost1
Out[377]:
{'fit_time': array([7.61636877, 6.27397704, 7.69671702, 6.23548198, 6.29793739,
        7.36405849, 6.14337111, 7.56082296, 6.31843615, 7.599159  ]),
 'score_time': array([0.03871989, 0.03743315, 0.03648615, 0.0313859 , 0.03738499,
        0.0339191 , 0.03651953, 0.03532767, 0.04051113, 0.03536582]),
 'test_score': array([0.40506329, 0.41139241, 0.35646688, 0.40378549, 0.37658228,
        0.37341772, 0.39240506, 0.38485804, 0.37223975, 0.41455696]),
 'train_score': array([0.42733017, 0.44865719, 0.43399209, 0.43478261, 0.39652449,
        0.43127962, 0.42654028, 0.4229249 , 0.4173913 , 0.42654028])}
In [353]:
# Obtain the preprocessed feature matrix as it is finally fed into XGBoost
# (note: this applies only the 'cleaning' step, not the earlier drop steps)
X_train_transformed = ppl4.named_steps['cleaning'].transform(X_train_agg)

# Obtain the trained XGBoost model
model = ppl4.named_steps['XGBoost']
In [354]:
# Retrieve the ColumnTransformer object
coltrans = ppl4.named_steps['cleaning']

# Obtain the selected numeric and categorical column names
num_features = coltrans.transformers_[0][2]  # the first transformer handles the numeric columns
cat_features = coltrans.transformers_[1][2]  # the second transformer handles the categorical columns
In [355]:
num_features = coltrans.transformers_[0][2]  # already a list of column names

# Expanded feature names (including the one-hot columns) from the ColumnTransformer itself
feature_names = coltrans.get_feature_names_out()
In [356]:
# shap was imported earlier; build an explainer for the trained booster
explainer = shap.Explainer(model)

# Compute SHAP values for the training matrix
shap_values = explainer(X_train_transformed)
shap.summary_plot(shap_values, X_train_transformed, feature_names=feature_names)
[Figure: SHAP summary plot of feature importance on the training data]
In [379]:
X_train_agg.columns
Out[379]:
Index(['Patient_id', 'Alkalinephos', 'BUN', 'BaseExcess', 'Bilirubin_total',
       'Calcium', 'Chloride', 'Creatinine', 'FiO2', 'HR', 'Hgb', 'MAP',
       'Magnesium', 'O2Sat', 'Platelets', 'Temp', 'WBC', 'pH', 'Age',
       'Gender'],
      dtype='object')
In [352]:
from xgboost import plot_importance  # built-in F-score importance plot

fig, ax = plt.subplots(figsize=(25, 15))
plot_importance(xgb4, ax=ax)
Out[352]:
<Axes: title={'center': 'Feature importance'}, xlabel='F score', ylabel='Features'>
[Figure: XGBoost built-in feature importance (F score) for xgb4]

Feature importance was also visualized using SHAP values; the resulting ranking is similar to XGBoost's built-in feature importance.
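A third, model-agnostic cross-check is permutation importance, which is already imported at the top of this notebook. A minimal sketch on synthetic data, with GradientBoostingClassifier standing in for the fitted pipeline (both the data and the stand-in model are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Small synthetic problem with 3 truly informative features (assumption).
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance shuffles one column at a time and measures the
# resulting drop in score; unlike gain-based importance it is directly
# comparable across model types.
result = permutation_importance(clf, X, y, n_repeats=5, scoring="roc_auc",
                                random_state=0)
ranked = result.importances_mean.argsort()[::-1]
print("feature ranking (most important first):", ranked)
```

If SHAP, gain-based, and permutation rankings broadly agree, as they do here, that adds confidence the top features are not artifacts of one attribution method.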

In [380]:
shap.initjs()  
In [381]:
i = 0  # index of the sample to explain

# Force plot: why the model predicts this value for the i-th sample
shap.plots.force(shap_values[i])
Out[381]:
[Interactive SHAP force plot: renders only in a live, trusted notebook with the JS library loaded]
In [383]:
shap.force_plot(explainer.expected_value, shap_values.values[i], matplotlib=True,feature_names=feature_names)
plt.show()
[Figure: SHAP force plot for sample 0, rendered via matplotlib]
In [384]:
i = 0
print(f"True label of sample {i}:", y_train_agg.iloc[i])
True label of sample 0: 0

However, based on the local prediction explanation in Figure 9, this patient did not actually have sepsis, yet the model predicted a disease risk probability of 0.72. This shows the model still needs improvement.

Next, I compare several different methods of handling class imbalance.

In [361]:
# Imbalance handling: SMOTE oversampling combined with Tomek-link cleaning
ppl5 = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and encoding
    ('cleaning', ColumnTransformer([
        # 2.1: numeric columns: scale, then median-impute
        ('num', make_pipeline(StandardScaler(),
                              SimpleImputer(strategy='median')),
         make_column_selector(dtype_include='float64')
        ),
        # 2.2: categorical columns: one-hot encode (imputation was handled earlier)
        ('cat', make_pipeline(
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
    # 3. Imbalance handling
    ('smote', SMOTETomek(tomek=TomekLinks(sampling_strategy='majority'))),
    # 4. Modelling
    ('XGBoost', xgb4)
])
In [362]:
ppl5.fit(X_train_agg, y_train_agg)   # fit() returns the fitted pipeline itself
In [368]:
y_pred_final_tomek = ppl5.predict(X_test_agg)
model_names = ['XGBoost(trained) + SMOTE_tomek']
xgb_scores1 = evaluate_model(y_test_agg, [y_pred_final_tomek], model_names)
print(xgb_scores1)
                            Model    Recall  Precision  F1 Score  Accuracy
0  XGBoost(trained) + SMOTE_tomek  0.658974   0.138619  0.229055  0.678916

Combining oversampling with undersampling fits best among the resampling strategies tried. However, given the analysis above, the model's interpretability and reliability remain limited, so no further adjustments are made in this case.

In [374]:
# Imbalance handling: random undersampling of the majority class
ppl6 = Pipeline([
    # 1. Data cleaning
    ('drop_id', DropFeatures(['Patient_id'])),
    ('drop_duplicates', DropDuplicateFeatures()),
    # 2. Missing-value imputation and encoding
    ('cleaning', ColumnTransformer([
        # 2.1: numeric columns: scale, then median-impute
        ('num', make_pipeline(StandardScaler(),
                              SimpleImputer(strategy='median')),
         make_column_selector(dtype_include='float64')
        ),
        # 2.2: categorical columns: one-hot encode (imputation was handled earlier)
        ('cat', make_pipeline(
            OneHotEncoder(sparse=False, handle_unknown='ignore')),
         make_column_selector(dtype_include='category')
        )])
    ),
    # 3. Imbalance handling
    ('under', RandomUnderSampler(random_state=42)),
    # 4. Modelling
    ('XGBoost', xgb4)
])
In [375]:
ppl6.fit(X_train_agg, y_train_agg)   # fit() returns the fitted pipeline itself
In [376]:
y_pred_final_under = ppl6.predict(X_test_agg)
model_names = ['XGBoost(trained) + under']
xgb_scores12 = evaluate_model(y_test_agg, [y_pred_final_under], model_names)
print(xgb_scores12)
                      Model    Recall  Precision  F1 Score  Accuracy
0  XGBoost(trained) + under  0.658974   0.138619  0.229055  0.678916
In [ ]:
df['SepsisLabel'].value_counts()

Model Strengths¶

  1. Complete and Reproducible XGBoost Pipeline
    I successfully built a full XGBoost classification pipeline covering data cleaning, preprocessing, resampling (SMOTE), and modeling, which is modular and easy to reuse.

  2. Hyperparameter Optimization
    I used Grid Search combined with K-Fold cross-validation to fine-tune hyperparameters, improving model performance step by step.

  3. Proper Handling of Data Leakage
    I split the data into training and test sets before data exploration, which, though conservative, effectively prevents information leakage.

  4. Regularization and Overfitting Control
    I included regularization (reg_alpha, reg_lambda) and sampling parameters (subsample, colsample_bytree) in the model design to reduce overfitting and improve generalization.

Model Weaknesses¶

  1. Limited Resampling Techniques
    Although SMOTE-Tomek and Random Undersampling were briefly compared at the end, the resampling strategies were not tuned, and methods such as ADASYN were not attempted at all.

  2. Insufficient Feature Engineering
    I did not create meaningful interaction or domain-specific features, nor applied advanced feature selection strategies.

  3. Simple Imputation Strategy
    I used median imputation, which piles every filled value at a single point; this compresses the interquartile range (IQR) and can distort the variable distributions the model fits.

  4. Inefficient Hyperparameter Tuning
    Due to hardware constraints, I adopted a step-by-step grid search with limited parameter combinations, resulting in redundant code and suboptimal coverage. Methods like Random Search or Bayesian Optimization could have been more efficient.

  5. Limited Performance Gain After Tuning
    Despite multiple tuning iterations, the performance improvement was minimal, which limits the model's predictive value. In my view, the possible reasons include:

    • Model complexity too low (max_depth too small);
    • Feature redundancy or multicollinearity;
    • Lack of exploration of parameter interactions.
  6. Model Interpretability
    Judging from the SHAP feature ranking and the local prediction analysis, the model's practical performance is only average: the results have some reference value but must be interpreted with caution. Although the project completes a fairly full machine-learning workflow, the limitations of a black-box model in medical settings remain evident in terms of interpretability and practicality.
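On weakness 3: IterativeImputer (already imported at the top of this notebook) models each feature as a function of the others, avoiding the pile-up at the median. A small synthetic sketch of the difference; the data, the 20% missingness rate, and the correlated column are all assumptions for illustration:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=200)  # a correlated column
mask = rng.random(X.shape) < 0.2                      # 20% missing at random
X_missing = X.copy()
X_missing[mask] = np.nan

# Median imputation places every filled value at one point; the iterative
# imputer regresses each column on the others, so fills track correlations.
X_median = SimpleImputer(strategy="median").fit_transform(X_missing)
X_iter = IterativeImputer(random_state=0).fit_transform(X_missing)

print("mean abs error  median: %.3f  iterative: %.3f"
      % (np.abs(X_median[mask] - X[mask]).mean(),
         np.abs(X_iter[mask] - X[mask]).mean()))
```

On correlated columns like the vital-sign features here, the iterative fill typically lands closer to the true values than a constant median.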

Key Reflections¶

  • Model quality depends more on features than tuning.
    Hyperparameter optimization cannot compensate for weak feature engineering. Better variable design based on domain understanding is crucial.

  • Balance between Recall and Precision matters.
    In imbalanced datasets, optimizing only Recall or Precision can lead to impractical models. Metrics like F1-score and ROC-AUC provide a better trade-off.

  • Grid Search is exhaustive but inefficient.
    Alternative methods such as Random Search or Bayesian Optimization should be considered for larger parameter spaces.

  • Innovative features must be validated.
    Although I attempted to create new acid-base status indicators, they did not improve model performance, showing that feature design needs to be both theoretically sound and data-driven.
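On the Recall/Precision balance: once a model outputs probabilities, the decision threshold itself is tunable. A sketch on synthetic data, with logistic regression standing in for the XGBoost pipeline (both are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic imbalanced problem (assumption, not the patient data).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Sweep all candidate thresholds and keep the one maximizing F1 = 2PR/(P+R),
# instead of silently using the default 0.5 cut-off.
prec, rec, thresh = precision_recall_curve(y_te, proba)
f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
best = f1[:-1].argmax()           # the final (P, R) point has no threshold
print("best threshold: %.3f  F1 there: %.3f" % (thresh[best], f1[best]))
```

For a clinical screening task one would more likely fix a minimum recall and pick the threshold maximizing precision subject to that, but the mechanics are identical.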

Future Improvements¶

  • Try advanced models like LightGBM, CatBoost, or Stacking to explore performance gains.
  • Apply feature selection methods like RFE (Recursive Feature Elimination) or feature importance pruning.
  • Tune the imbalance handling further, e.g. ADASYN or cost-sensitive weighting via scale_pos_weight, beyond the SMOTE-Tomek and undersampling comparison already made.
  • Engineer new interaction or domain-based features to enrich the input space.
  • Use early stopping and visualization during training to better diagnose underfitting or overfitting trends.
In [214]:
#smote_pipeline = make_pipeline(
#    SMOTE(random_state=42),
#    XGBClassifier(n_estimators=150, use_label_encoder=False, scale_pos_weight=12, eval_metric="aucpr", verbosity=1, disable_default_eval_metric=1) 
#
#)
#param_grid = {
#    'pca__n_components': [5, 10, 15, 20, 25, 30],
#    'model__max_depth': [2, 3, 5, 7, 10],
#    'model__n_estimators': [10, 100, 500],
#}
#grid = GridSearchCV(pipeline, param_grid, cv=5, n_jobs=-1, scoring='roc_auc')
In [215]:
#score3 = cross_val_score(smote_pipeline, X_train_model, y_train_agg, scoring='recall', cv=2)
#print("Cross Validation Recall Scores are: {}".format(score3))
#print("Average Cross Validation Recall score: {}".format(score3.mean()))
Cross Validation Recall Scores are: [0.77243995 0.23767383]
Average Cross Validation Recall score: 0.5050568900126422
In [218]:
#params = {"n_estimators": [150, 200],"max_delta_step": [0.1], "subsample": [None, 0.5, 1], "reg_lambda": [1, 1.1], "alpha": [0, 0.1]}